
Maoxian Zhao, Yue Qin, "Feature Selection on Elite Hybrid Binary Cuckoo Search in Binary Label Classification", Computational and Mathematical Methods in Medicine, vol. 2021, Article ID 5588385, 13 pages, 2021. https://doi.org/10.1155/2021/5588385

Feature Selection on Elite Hybrid Binary Cuckoo Search in Binary Label Classification

Academic Editor: Waqas Haider Bangyal
Received 19 Feb 2021
Accepted 22 Apr 2021
Published 12 May 2021

Abstract

To address the low optimization accuracy of the cuckoo search algorithm, a new search algorithm, the Elite Hybrid Binary Cuckoo Search (EHBCS) algorithm, is developed by introducing feature weighting and an elite strategy. The EHBCS algorithm is designed for feature selection on a series of binary classification datasets, including low-dimensional and high-dimensional samples, using an SVM classifier. The experimental results show that the EHBCS algorithm achieves better classification performance compared with the binary genetic algorithm and the binary particle swarm optimization algorithm. Moreover, we demonstrate its superiority in terms of standard deviation, sensitivity, specificity, precision, and F-measure.

1. Introduction

Feature selection attempts to find the most discriminative subset of features that yields reasonable recognition rates for a classifier. Given a problem with n features, there are 2^n possible feature subsets, making an exhaustive search impracticable for high-dimensional feature spaces. In addition, high-dimensional data contain a large number of irrelevant and noise-polluted features, and there is often information redundancy between features. These factors degrade the performance of learning algorithms and significantly increase their computational complexity. Therefore, feature selection has become a research hot spot.

As a key technology of pattern recognition and machine learning, feature selection is an effective method for dealing with high-dimensional data. Feature selection models can be divided into three categories [1]: filter [2], embedded [3], and wrapper [4]. Filter methods define the relevant features without prior classification of the data. Embedded methods integrate the feature selection algorithm into the classification algorithm, conducting feature selection and training at the same time. Wrapper methods, on the other hand, incorporate classification algorithms to search for and select relevant features, and they generally outperform filter methods in terms of classification accuracy [5]. Recent studies have shown that feature selection can help solve many practical problems, including classification and medical problems [6–9].

Another vital part of the feature selection process is the search strategy: selecting the feature subset that meets the optimal evaluation criterion, which is usually a combinatorial optimization problem. In recent years, metaheuristic algorithms inspired by biological behavior and physical systems in nature have been proposed to solve such optimization problems [10]. Metaheuristic optimization algorithms, also known as nature-inspired algorithms, study the evolutionary behavior of species and translate it into computer science algorithms, including the genetic algorithm [11], particle swarm optimization [12], the bat algorithm [13, 14], and the cuckoo algorithm [15]. Metaheuristic optimization algorithms have achieved good results in feature selection. For example, Liu et al. [16] combined the genetic algorithm and simulated annealing to select feature subsets; their experimental results show that the hybrid algorithm has high reliability and strong convergence. In contrast, Siedlecki and Sklansky [17] combined the genetic algorithm with feature selection to some effect but exposed the genetic algorithm's problem of premature convergence. Kennedy and Eberhart [18] proposed the binary particle swarm optimization algorithm (BPSO), which modifies the traditional particle swarm optimization algorithm to solve binary optimization problems, and Firpi and Goodman [19] applied BPSO to feature selection problems.

The success of metaheuristic methods lies in the efficiency of their search strategies and their ability to find solutions to combinatorial optimization problems. Metaheuristics use the information gathered during the search to guide the search process and are therefore considered largely independent of the problem. The cuckoo search algorithm is a heuristic optimization approach introduced by Yang and Deb in 2009 [15]. It simulates the parasitic breeding habits of cuckoo birds and is a stochastic algorithm with strong global search ability. The cuckoo search algorithm has been employed efficiently in many fields, such as intelligent optimization and computation. Cuckoo search is superior to other algorithms on continuous optimization problems, including the spring design and welded-beam problems in engineering design applications [20], and it is especially suitable for large-scale problems [21]. Valian et al. have applied it to training neural networks [22] and spiking neural models [23]. Experiments have shown that CS has better search capability than algorithms such as particle swarm optimization, the genetic algorithm, and the artificial bee colony algorithm [21, 24, 25]. Therefore, CS is a metaheuristic algorithm well suited to obtaining high performance on combinatorial optimization problems.

The CS can only solve optimization problems in the continuous solution space. To solve combinatorial optimization problems in discrete solution space, Gherboudj et al. [26] proposed a binary version of the cuckoo search algorithm, namely, BCS algorithm. Pereira and Rodrigues [27] applied BCS algorithm to feature selection. Bhattacharjee and Sarmah [28] improved BCS by using the balance combination of local random walk and global exploration random walk so that BCS algorithm can better balance locality and globality. Sudha and Selvarajan [29] presented a feature selection approach based on an enhanced cuckoo algorithm and applied it to breast X-ray images. It can supply valuable information for clinicopathologists. Aziz and Hassanien [30] proposed a new improved cuckoo algorithm combined with the theoretical knowledge of rough set and finally applied it to feature selection.

The cuckoo search algorithm uses the Lévy-flight random walk to explore the search space during iteration. Because the Lévy flight makes sharp 90-degree turns, the search cannot effectively explore the area around a cuckoo's nest, so the algorithm suffers from low optimization accuracy [31]. To improve the cuckoo search algorithm, this paper proposes an Elite Hybrid Binary Cuckoo Search algorithm. The novelty of the paper is two-fold:
(1) EHBCS adopts feature weighting and an elite strategy in the binary cuckoo search algorithm. Feature weighting based on the Relief algorithm estimates each feature's weight and importance according to its ability to distinguish instances of different classes. The elite strategy and the genetic-algorithm selection and crossover operators are embedded into the cuckoo algorithm so that the well-positioned nests can be inherited by the next generation.
(2) EHBCS is applied to a set of binary label datasets, including low-dimensional and high-dimensional samples, such that only the best features are retained in the subset. Experimental results demonstrate that EHBCS achieves better classification performance, minimizing the number of selected features while maximizing the SVM classification accuracy, compared with the binary genetic algorithm and binary particle swarm optimization.

The main contributions of this paper are summarized as follows: (1) It is the first time that feature weighting and an elite strategy are combined with the BCS algorithm. (2) It specifically addresses the low optimization accuracy of the BCS algorithm. (3) It may provide useful insights for high-dimensional data research such as text processing, medical research, and gene analysis.

The structure of this paper is as follows: Section 2 provides details of the classical version of the Cuckoo Search and Binary Cuckoo Search algorithms; Section 3 presents the Elite Hybrid Binary Cuckoo Search (EHBCS) algorithm; Section 4 discusses the experimental methodology and in particular the dataset and evaluation measures; numerical experiment is also carried out to evaluate the prediction performance of our method in Section 5. The results demonstrate that the proposed method is efficient for high-dimensional datasets; finally, the conclusions of our work are given in Section 6.

2. Cuckoo Search Algorithm

2.1. Cuckoo Search (CS) Algorithm

The parasitic behavior of cuckoos is extremely intriguing. These birds lay their eggs in host nests and mimic external characteristics of the host eggs, such as color and spots. If this strategy is unsuccessful, the host may throw the cuckoo's eggs away or simply abandon its nest and build a new one elsewhere. Based on this behavior, Yang and Deb [15] developed a novel evolutionary optimization algorithm named cuckoo search (CS), which they summarized with three rules:
(1) Each cuckoo chooses a nest randomly in which to lay its eggs.
(2) The number of available host nests is fixed, and nests with high-quality eggs will be passed on to the next generations.
(3) If a host bird discovers the cuckoo egg, it can throw the egg away or abandon the nest and build a completely new one.

For optimization problems, each nest represents a possible solution, and a nest can contain one or more eggs depending on the size of the problem. Firstly, the algorithm randomly initializes each nest, and then the algorithm carries out an iterative process. During each iteration, each nest is updated by a Lévy-flight random walk, as shown in Equations (1) and (2):

x_i^(t+1) = x_i^(t) + α ⊕ Lévy(λ).   (1)

The updating formula for each dimension j is expressed as

x_{i,j}^(t+1) = x_{i,j}^(t) + α · Lévy_j(λ),   (2)

where x_i^(t) denotes the i-th nest and x_{i,j}^(t) stands for the j-th egg at nest i in generation t. α > 0 is the step size, and the product ⊕ means entrywise multiplication. In most cases, we can use α = 1. The Lévy flights Lévy(λ) employ a random step length, and Lévy_j(λ) is its j-th component.

In the 1930s, Lévy proposed the Lévy distribution, holding that the lengths of the successive jumps of a Lévy flight over time follow this distribution. Later, many scholars studied the Lévy distribution and used it to explain random phenomena in nature, such as Brownian motion and random walks. By simplification and the Fourier transform, Yang [15] obtained the probability density function of the Lévy distribution in power-law form:

L(s) ~ s^(−λ), 1 < λ ≤ 3,   (3)

where λ is the power coefficient. Equation (3) is a probability distribution with a heavy tail. Although it essentially describes the random walk process of cuckoo birds, it is not expressed in a concise, easily programmable mathematical form suitable for implementing the CS algorithm. Yang therefore adopted the Mantegna algorithm to simulate the Lévy jump path:

s = u / |v|^(1/β),   (4)

where s is the Lévy flight step Lévy(λ); the parameters of Equations (3) and (4) are related by λ = β + 1 with 0 < β ≤ 2. The parameter σ_v = 1, and u and v are normally distributed random numbers satisfying Equations (5) and (6):

u ~ N(0, σ_u²), v ~ N(0, σ_v²),   (5)

σ_u = { Γ(1 + β) sin(πβ/2) / [ Γ((1 + β)/2) β 2^((β−1)/2) ] }^(1/β).   (6)

The step s is then the path the cuckoo travels in the solution space each time it randomly searches for a new nest location from the old one according to Equation (4). In the final step of each iteration, the nests with the worst quality are abandoned with probability p_a ∈ [0, 1]. Algorithm 1 shows the pseudo-code for the classical version of CS.
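As an illustration, Mantegna's procedure for drawing a Lévy-flight step can be written in a few lines of Python. This is a minimal sketch, not the paper's code; the function names are our own.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-flight step via Mantegna's algorithm (lambda = beta + 1)."""
    # Standard deviation of u from the Mantegna formula; sigma_v = 1.
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)   # u ~ N(0, sigma_u^2)
    v = random.gauss(0, 1)         # v ~ N(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_update(nest, alpha=1.0, beta=1.5):
    """One Lévy-flight random-walk update of a continuous nest position."""
    return [x + alpha * levy_step(beta) for x in nest]
```

The heavy tail of the resulting step distribution is what produces the occasional long jumps that give CS its global search ability.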

Objective function f(x), x = (x_1, ..., x_d)
Generate initial population of n host nests x_i (i = 1, 2, ..., n)
while (t < MaxGeneration) or (stop criterion) do
 Get a cuckoo i randomly by Lévy flights; evaluate its quality/fitness F_i
 Choose a nest among n (say, j) randomly
 if (F_i > F_j) then
  Replace j by the new solution
 end
 A fraction (p_a) of worse nests are abandoned and new ones are built
 Keep the best solutions (or nests with quality solutions)
 Rank the solutions and find the current best
end
Postprocess results and visualization
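Putting the three rules together, a continuous-domain CS loop can be sketched as follows. This is a minimal sketch for a minimization problem; the parameter defaults and search bounds are illustrative, not the paper's.

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy step via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0, sigma_u) / abs(random.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, dim, n=15, pa=0.25, alpha=0.01, iters=200):
    """Minimize f over [-5, 5]^dim with a basic cuckoo search loop."""
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        # Rule 1: a random cuckoo proposes a new solution by Lévy flight
        i = random.randrange(n)
        new = [x + alpha * levy_step() for x in nests[i]]
        new_fit = f(new)
        # Rule 2: it replaces a randomly chosen nest if it is better
        j = random.randrange(n)
        if new_fit < fit[j]:
            nests[j], fit[j] = new, new_fit
        # Rule 3: a fraction pa of the worst nests are abandoned and rebuilt
        worst = sorted(range(n), key=lambda k: fit[k], reverse=True)[:int(pa * n)]
        for k in worst:
            nests[k] = [random.uniform(-5, 5) for _ in range(dim)]
            fit[k] = f(nests[k])
    best = min(range(n), key=lambda k: fit[k])
    return nests[best], fit[best]
```

Running it on the sphere function, for example, quickly drives the best fitness toward zero.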
2.2. Binary Cuckoo Search (BCS) Algorithm

In traditional CS, the position of the solution is updated in a continuous search space. Unlike the above, the BCS search space for feature selection is modeled as a binary d-bit string, where d is the number of features. BCS represents each nest as a binary vector, where each 1 corresponds to a selected feature and each 0 to a discarded one. This means each nest represents a possible solution, and each egg (bit) represents a feature.

The original cuckoo algorithm is extended to the discrete binary domain by introducing a sigmoid mapping function, as follows [25]:

S(x_{i,j}(t)) = 1 / (1 + e^(−x_{i,j}(t))),   (7)

x_{i,j}(t+1) = 1 if σ < S(x_{i,j}(t)), and 0 otherwise,   (8)

in which σ ~ U(0, 1) and x_{i,j}(t+1) denotes the new egg's value at iteration t + 1.
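The sigmoid binarization can be sketched as follows (a minimal sketch; the helper names are ours):

```python
import math
import random

def sigmoid(x):
    """Map a continuous value to a selection probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position):
    """Map a continuous nest position to a binary feature mask (1 = selected)."""
    return [1 if random.random() < sigmoid(x) else 0 for x in position]
```

Each continuous coordinate thus acts as the log-odds that its feature is kept in the candidate subset.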

3. Elite Hybrid Binary Cuckoo Search (EHBCS) Algorithm

3.1. Feature Weighting Based on Relief Algorithm

The core idea of feature weighting based on Relief is to estimate the weight and importance of each feature according to its ability to distinguish instances of different classes [32]. Given a two-class dataset D = {(x_1, y_1), ..., (x_n, y_n)} containing n cases, where y_i ∈ {0, 1} is a class label and x_i is a case in D represented as a real-valued vector of dimension d, Relief performs the following iterative learning: randomly select a case R, find the nearest case H of the same class (the nearest hit) and the nearest case M of a different class (the nearest miss), and then update the weights using the following rule:

w_i = w_i − diff(i, R, H)/T + diff(i, R, M)/T,   (9)

where w_i represents the weight of the i-th feature and T represents the maximum number of iterations. The function

diff(i, A, B) = |A_i − B_i|   (10)

is used to calculate the difference between the i-th dimensional eigenvalues of two instances, that is, the i-th component of the absolute feature difference vector.

A variant that considers k neighbors has been developed from the nearest-neighbor Relief; its weight update formula is

w_i = w_i − Σ_{j=1}^{k} diff(i, R, H_j)/(T·k) + Σ_{j=1}^{k} diff(i, R, M_j)/(T·k),   (11)

where H_1, ..., H_k (respectively M_1, ..., M_k) are the k nearest neighbors of R in D of the same (respectively different) class by Euclidean distance. The process is shown in Algorithm 2.

Input: binary label dataset D with n cases and d dimensions, Maxiter T, number of neighbors k
Output: weight vector w
for i = 1 to d do
 w_i = 0
end
while t ≤ T do
 Randomly select a case R from the dataset; find its k nearest cases of the same class H_1, ..., H_k and k nearest cases of a different class M_1, ..., M_k
 for i = 1 to d do
  Compute diff(i, R, H_j) and diff(i, R, M_j) by formula (10)
  Update w_i by formula (11)
 end
end
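The Relief procedure above can be sketched as follows, in a minimal two-class form with a single nearest hit and miss (k = 1); the helper names are ours, and features are assumed pre-scaled to [0, 1].

```python
import random

def diff(i, a, b):
    # Absolute difference of the i-th (numeric, pre-scaled) feature of two cases
    return abs(a[i] - b[i])

def relief(X, y, T=100, seed=0):
    """Basic two-class Relief: returns one weight per feature."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    for _ in range(T):
        r = rng.randrange(len(X))
        R, label = X[r], y[r]
        # Nearest hit (same class, excluding R itself) and nearest miss (other class)
        hits = [k for k in range(len(X)) if y[k] == label and k != r]
        misses = [k for k in range(len(X)) if y[k] != label]
        H = X[min(hits, key=lambda k: dist(R, X[k]))]
        M = X[min(misses, key=lambda k: dist(R, X[k]))]
        for i in range(d):
            # Penalize features that differ on hits, reward those that differ on misses
            w[i] += (diff(i, R, M) - diff(i, R, H)) / T
    return w
```

On a toy dataset where feature 0 perfectly separates the classes and feature 1 is noise, the weight of feature 0 comes out clearly larger.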
3.2. Selection and Crossover Operator

The selection operator inherits individuals with high fitness in the current population to the next generation according to a selection probability; generally, individuals with higher fitness have more chances of being inherited. This paper uses the roulette-wheel model to select individuals. The calculation formulas are as follows:

p_i = f_i / Σ_{j=1}^{N} f_j,   (12)

q_i = Σ_{j=1}^{i} p_j,   (13)

where p_i is the selection probability, q_i is the cumulative probability, f_i is the fitness function value of individual i, and N is the size of the group. The selection operator process is given in Algorithm 3.

The crossover operator crosses a selected pair of individuals with a given probability, using, for example, single-point or multipoint crossover. In this paper, single-point crossover is adopted: a random number within the range of the individual's coding bits is generated as the crossover point, and the codes of the two individuals from this point to the end are exchanged, completing the crossover.
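The roulette-wheel selection and one-point crossover described above can be sketched as follows (a minimal sketch; the helper names are ours):

```python
import random

def roulette_select(fitness, rng=random):
    """Roulette wheel: return an index with probability f_i / sum(f)."""
    total = sum(fitness)
    q = 0.0
    r = rng.random()
    for i, f in enumerate(fitness):
        q += f / total          # cumulative probability q_i
        if r < q:
            return i
    return len(fitness) - 1     # guard against floating-point round-off

def one_point_crossover(a, b, rng=random):
    """Swap the tails of two equal-length bit lists at a random cut point."""
    r = rng.randrange(1, len(a))
    return a[:r] + b[r:], b[:r] + a[r:]
```

Note that crossover only recombines existing bits: at every position the two children together carry exactly the bits of the two parents.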

3.3. Weight-Based Elite Hybrid Binary Cuckoo Search (EHBCS) Algorithm

In the CS algorithm, the Lévy flight explores the search space using a straight flight path with sudden 90-degree turns; Figure 1 simulates the Lévy flight path. In addition, the CS algorithm is highly dependent on the random walk search, which can easily move from one area to another without carefully exploring each nest. Therefore, the CS algorithm has weak local search ability and low optimization accuracy [31]. To overcome this weakness, an elite strategy and genetic-algorithm operators (selection and crossover) are embedded into the cuckoo algorithm so that the well-positioned nests can be inherited by the next generation. The so-called elite strategy preserves the nests in good locations so that the optimal nest is not lost during the Lévy-flight iterations of the algorithm. The selection operator inherits the individuals with high fitness in the current population to the next generation according to certain rules; generally, individuals with high fitness have more chances of being inherited. The crossover operator usually takes two individuals as candidate solutions and, with a certain probability, generates neighborhood solutions by exchanging parts of their chromosomes.

The CS algorithm is suited to continuous-domain problems, whereas feature selection is a binary discrete problem. Considering these facts, this paper proposes the Elite Hybrid Binary Cuckoo Search (EHBCS) algorithm. The EHBCS algorithm first weights the features according to the Relief algorithm described in Section 3.1, so that features with larger weights have greater opportunities to be selected. Then, in each iteration of the EHBCS algorithm, the optimal nest undergoes neither Lévy flight nor crossover, to avoid damaging the optimal nest position. The nests generated by Lévy flight are operated on by the selection and crossover operators.

Since the existing BCS algorithm does not account for the influence of the feature weights on the mapping function, the coefficient in the sigmoid function is replaced by the feature weight in this paper, so that features with significant weight have a greater chance of being selected and the improved algorithm can finish the iterative process faster. The BCS mapping function is modified as follows.

When w_j ≥ 0:

S(x_{i,j}(t)) = 1 / (1 + e^(−w_j · x_{i,j}(t))),   (14)

x_{i,j}(t+1) = 1 if σ < S(x_{i,j}(t)), and 0 otherwise.   (15)

When w_j < 0:

S(x_{i,j}(t)) = 1 / (1 + e^(−w_j · x_{i,j}(t))),   (16)

x_{i,j}(t+1) = 1 if σ < S(x_{i,j}(t)), and 0 otherwise.   (17)

The function S does not represent the probability of change; it represents the probability that a given bit becomes 1. The corresponding function graph is shown in Figure 2. It can be seen from the figure that the greater the weight w_j at the same abscissa, the greater the corresponding value of S. That is, the greater the feature weight, the greater the probability of the feature being selected.

It should be emphasized that the weights calculated by the Relief algorithm may be negative; a negative weight indicates that the distance to the nearest same-class neighbors is larger than the distance to the nearest different-class neighbors. Such a feature is therefore considered unfavorable to classification, and the probability of selecting it during feature selection is low.
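As one plausible reading of the modified mapping (the exact published forms of (14)–(17) are not fully recoverable from this text, so the names and the treatment of negative weights below are our assumptions), the weight simply scales the sigmoid's argument:

```python
import math
import random

def weighted_sigmoid(x, w):
    """Selection probability of a feature, steered by its Relief weight w.
    A larger positive w pushes the probability toward 1 for positive x;
    a negative w (a feature judged harmful) keeps the probability low."""
    return 1.0 / (1.0 + math.exp(-w * x))

def binarize_weighted(position, weights, rng=random):
    """Binarize a continuous nest position using per-feature weights."""
    return [1 if rng.random() < weighted_sigmoid(x, w) else 0
            for x, w in zip(position, weights)]
```

With this form, a negative weight flips the sigmoid, so for positive coordinates the selection probability stays below 1/2, matching the behavior described above.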

Because the purpose of the nest-discovery and crossover operations is to maintain population diversity, this paper adopts the crossover operation instead of the discovery operation. In the late iterations of the algorithm, the elite strategy proposed in this paper ensures convergence. The elite selection and crossover operators, as well as the pseudo-code of the proposed algorithm, are given in Algorithm 3 and Algorithm 4.

Input: population with n nests, number of dimensions (features) d, crossover rate Pc, fitness function values F
Output: new population after elite selection and crossover
for i = 1 to n do
 p(i) = F(i) / Σ_{j=1}^{n} F(j); q(i) = Σ_{j=1}^{i} p(j)
end
for i = 1 to n do
 Generate a random number r from [0, 1]
 if (r ≤ q(1)) then
  Select nest 1
 else
  if (q(j−1) < r ≤ q(j)) then
   Select nest j
  end
 end
end
Train the classifier to evaluate the accuracy of each selected nest
Calculate the fitness function values and store them in F
Bestnest = the nest with the maximal fitness value
Pair the selected nests at random, excluding Bestnest, into pairs such as (nest_a, nest_b)
for each pair (nest_a, nest_b) do
 if (rand() < Pc) then
  Generate a random integer r in (1, d) and perform one-point crossover between the two individuals at position r
 end
end
The crossed nests and Bestnest form a new population as output
Input: labelled dataset D, Maxiter T, CS parameter values, number of nests n, number of dimensions (features) d
Output: Bestnest
for i = 1 to n do
 nest_i = a randomly generated binary 0-1 string of length d
 Train the classifier to evaluate the accuracy of nest_i
 Calculate the fitness function value and store it in F(i)
end
Bestnest = the nest with the maximal fitness value
t = 0
while (t < T) or (stop criterion) do
 for i = 1 to n do
  for j = 1 to d do
   newnest_{i,j} generated by formulas (14)–(17) and stored in newnest_i
  end
 end
 Train the classifier to evaluate the accuracy of each newnest_i
 Calculate the fitness function values and store them in Fnew
 for i = 1 to n do
  if (Fnew(i) > F(i)) then
   nest_i = newnest_i
   F(i) = Fnew(i)
  end
 end
 Generate the new population after elite selection and crossover (Algorithm 3)
 Train the classifier to evaluate the accuracy of each nest
 Calculate the fitness function values and store them in F
 Bestnest = the nest with the maximal fitness value
 t = t + 1
end

4. Experimental Methodology

4.1. Datasets

Eight datasets were extracted from the UCI Machine Learning Repository [33–35]. In order to make a more comprehensive comparison between the proposed algorithm and other algorithms, four low-dimensional and four high-dimensional feature datasets were selected. Each dataset has two classes, and Table 1 provides the datasets' names, the total number of features, the total number of cases, and the classification accuracy before feature selection.


Datasets | Features | Cases | Accuracy

Cervical Cancer Behavior Risk | 19 | 72 | 0.865
Breast Cancer Wisconsin (diagnostic) | 30 | 569 | 0.627
Breast Cancer Wisconsin (prognosis) | 33 | 198 | 0.763
Sonar | 60 | 208 | 0.702
Colon Tumor | 2000 | 62 | 0.853
Medulloblastomas | 5893 | 34 | 0.648
Central Nervous System | 7129 | 60 | 0.600
Relation Leukemia | 7129 | 72 | 0.919

4.2. Performance Evaluation Measures

Generalization ability is the ability of a model to predict new data accurately after training on the training datasets. Cross-validation is a widely used method in data mining and machine learning for evaluating model generalization ability [36]. In cross-validation, the dataset is divided into two parts: a training set, used to build the prediction model, and a test set, used to test the model's generalization ability. K-fold cross-validation was performed, with the value of K set according to whether a dataset had fewer or more than 100 cases. The evaluation indicators used include Accuracy, Sensitivity, Specificity, Precision, and F-measure [37].
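The K-fold protocol can be sketched as follows; this is a minimal index-splitting sketch (stratification and shuffling are omitted, and the function name is ours):

```python
def kfold_indices(n_cases, k):
    """Split case indices 0..n_cases-1 into k disjoint test folds.
    Yields (train_indices, test_indices) pairs, one per fold."""
    folds = [list(range(i, n_cases, k)) for i in range(k)]
    for test_idx in folds:
        test_set = set(test_idx)
        train_idx = [j for j in range(n_cases) if j not in test_set]
        yield train_idx, test_idx
```

Each case appears in exactly one test fold, so every case is used for testing once and for training k − 1 times.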

where TP is the total number of positive cases correctly identified as positive, TN is the total number of negative cases correctly identified as negative, FP is the total number of negative cases wrongly identified as positive, and FN is the total number of positive cases wrongly identified as negative.

For the overall classification performance of each algorithm, we calculate the average value over all test folds as Avg = (1/K) Σ_{k=1}^{K} v_k, where v_k is the value measured on the k-th fold and K is the total number of folds.
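These measures all follow directly from the confusion-matrix counts; a minimal sketch (the helper name is ours):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, precision, F-measure from TP/TN/FP/FN."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    se = tp / (tp + fn) if tp + fn else 0.0      # sensitivity (recall)
    sp = tn / (tn + fp) if tn + fp else 0.0      # specificity
    pre = tp / (tp + fp) if tp + fp else 0.0     # precision
    f1 = 2 * pre * se / (pre + se) if pre + se else 0.0
    return acc, se, sp, pre, f1
```

For example, with y_true = [1, 1, 0, 0] and y_pred = [1, 0, 0, 1], each count is 1 and all five measures equal 0.5.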

4.3. Evaluating Classification Performance

The support vector machine (SVM) classifier was adopted to evaluate the classification accuracy of feature subsets. SVM is a supervised machine learning algorithm introduced by Boser et al. [38], in which the data are mapped to points in an n-dimensional feature space, n being the number of features. The final output of SVM is an optimal hyperplane that classifies new cases.

SVM depends heavily on the kernel function, so experiments with different kernel functions are essential. A kernel function is a similarity function that determines the similarity between any two inputs by computing the distance between them. Determining a kernel function is not difficult: any function that satisfies Mercer's theorem can serve as one. There are various types of kernel functions, such as the linear, polynomial, radial basis, sigmoid, and composite kernel functions. The appropriate kernel depends on the dataset and the problem, and it is therefore often selected experimentally. Based on such experiments, suitable kernel functions were selected for the datasets; they are presented in Table 2.
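For reference, the two kernels used in Table 2 reduce to simple formulas; a minimal sketch (gamma is an assumed hyperparameter, not a value from the paper):

```python
import math

def linear_kernel(x, z):
    """K(x, z) = <x, z>, the plain dot product."""
    return sum(a * b for a, b in zip(x, z))

def rbf_kernel(x, z, gamma=1.0):
    """Radial basis function kernel: K(x, z) = exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))
```

The RBF kernel equals 1 when the two inputs coincide and decays with their squared distance, which is why it suits the lower-dimensional datasets here, while the linear kernel is the common choice for the very high-dimensional gene-expression data.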


Dataset | Kernel function

Cervical Cancer Behavior Risk | Radial basis function
Breast Cancer Wisconsin (diagnostic) | Radial basis function
Breast Cancer Wisconsin (prognosis) | Radial basis function
Sonar | Radial basis function
Colon Tumor | Linear function
Medulloblastomas | Linear function
Central Nervous System | Linear function
Relation Leukemia | Linear function

4.4. Fitness Function

The main objective of the feature selection task is to find a subset of features from the dataset so that the learning algorithm can use these selected features to achieve as high accuracy as possible.

In classification problems, two feature subsets with different numbers of features may yield the same classification accuracy on the same dataset. Therefore, at equal classification accuracy, if the metaheuristic algorithm finds the subset with more features earlier, the subset with fewer features will be ignored. To overcome this limitation, this paper proposes a new evaluation method as the fitness function, which considers the classification accuracy and takes the rate of feature reduction as an adjusting term.

Let d be the total number of features in the dataset, n_s be the number of features selected by the metaheuristic optimization algorithm, α be the weight of the rate of feature reduction, and 1 − α be the weight of the average accuracy. The value of the fitness function can then be calculated as shown in Equation (28):

fitness = (1 − α) · Avgacc + α · (d − n_s)/d.   (28)

We set α = 0.2.
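Under this description, the fitness is a weighted sum of accuracy and the feature-reduction rate. The sketch below is a minimal reading of that weighted-sum form (the function name is ours):

```python
def fitness(avg_accuracy, n_selected, d, alpha=0.2):
    """Weighted sum of average accuracy and feature-reduction rate.
    alpha weights the reduction term; the paper sets alpha = 0.2."""
    reduction_rate = (d - n_selected) / d
    return (1 - alpha) * avg_accuracy + alpha * reduction_rate
```

At equal accuracy, a subset with fewer features scores strictly higher, which is exactly the tie-breaking behavior the adjusting term is meant to provide.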

4.5. Parameter Setting

The performance of the proposed EHBCS is compared against the Binary Genetic Algorithm (BGA) and Binary Particle Swarm Optimization (BPSO) algorithms. Table 3 lists the parameter values for each algorithm. The population size of all optimization algorithms is set to 30, and each algorithm was run 5 times to perform the feature selection task. All runs were executed in Matlab 2017 on a Windows 10 Huawei MagicBook with an Intel(R) Core(TM) i5-8250U at 1.6 GHz and 8 GB of RAM.


AlgorithmsParameters

EHBCS
BGA
BPSO

4.6. Analysis of Computational Complexity

The EHBCS algorithm uses the Relief algorithm, the binary conversion of the Lévy flight, and the selection and crossover process. For the Relief algorithm, with T iterations, n cases, and individual dimension d, the complexity is O(T·n·d). For the Lévy flight and binary conversion, with n individuals, individual dimension d, and T iterations, the computational complexity is O(T·n·d). For selection and crossover, with n individuals, the computational complexity per iteration is O(n). Therefore, the overall computational complexity of the EHBCS algorithm is O(T·n·d).

5. Experimental Results

Figures 3 and 4 provide the performance of all optimization algorithms for feature selection using the medical datasets described in Section 4.1. They contain the following information:

Accuracy: classification accuracy for each dataset

All: classification accuracy before feature selection for each dataset

SR: size reduction percentage is used to evaluate the percentage of removed features compared to all available features

Tables 4 and 5 provide the performance of all optimization algorithms for feature selection using binary label datasets described in Section 4.1. Each table column contains the following information:


Fitness | Algorithm | Avgacc | Max | Min | Std | AvgN | SE | SP | Pre | F1 | Dataset

fit | EHBCS | 0.937 | 0.948 | 0.920 | 0.011 | 6.000 | 0.840 | 0.966 | 0.895 | 0.854 | Cervical Cancer Behavior Risk
 | BGA | 0.954 | 0.975 | 0.934 | 0.015 | 5.000 | 0.917 | 0.952 | 0.906 | 0.893 |
 | BPSO | 0.925 | 0.946 | 0.896 | 0.014 | 4.400 | 0.700 | 0.978 | 0.967 | 0.775 |
 | Avg | 0.937 | 0.947 | 0.926 | 0.013 | 5.133 | 0.819 | 0.965 | 0.908 | 0.841 |
acc | EHBCS | 0.924 | 0.932 | 0.906 | 0.011 | 6.600 | 0.785 | 0.960 | 0.821 | 0.793 |
 | BGA | 0.960 | 0.973 | 0.945 | 0.009 | 5.600 | 0.883 | 0.981 | 0.943 | 0.896 |
 | BPSO | 0.920 | 0.948 | 0.896 | 0.017 | 5.300 | 0.753 | 0.917 | 0.800 | 0.754 |
 | Avg | 0.935 | 0.951 | 0.916 | 0.012 | 5.833 | 0.807 | 0.953 | 0.854 | 0.814 |
fit | EHBCS | 0.957 | 0.961 | 0.953 | 0.003 | 6.000 | 0.947 | 0.960 | 0.936 | 0.940 | Breast Cancer Wisconsin (diagnostic)
 | BGA | 0.968 | 0.975 | 0.960 | 0.005 | 3.100 | 0.970 | 0.955 | 0.926 | 0.947 |
 | BPSO | 0.962 | 0.972 | 0.954 | 0.006 | 5.600 | 0.954 | 0.941 | 0.907 | 0.930 |
 | Avg | 0.962 | 0.969 | 0.956 | 0.005 | 4.900 | 0.953 | 0.952 | 0.923 | 0.939 |
acc | EHBCS | 0.966 | 0.968 | 0.963 | 0.001 | 8.200 | 0.957 | 0.969 | 0.950 | 0.952 |
 | BGA | 0.973 | 0.977 | 0.968 | 0.003 | 7.900 | 0.954 | 0.959 | 0.940 | 0.952 |
 | BPSO | 0.911 | 0.935 | 0.879 | 0.020 | 16.400 | 0.957 | 0.961 | 0.937 | 0.946 |
 | Avg | 0.950 | 0.960 | 0.937 | 0.008 | 10.833 | 0.956 | 0.963 | 0.942 | 0.948 |
fit | EHBCS | 0.787 | 0.793 | 0.778 | 0.007 | 10.200 | 0.294 | 0.926 | 0.604 | 0.371 | Breast Cancer Wisconsin (prognosis)
 | BGA | 0.831 | 0.848 | 0.808 | 0.010 | 9.500 | 0.200 | 0.958 | 0.593 | 0.277 |
 | BPSO | 0.814 | 0.825 | 0.793 | 0.014 | 10.800 | 0.216 | 0.936 | 0.583 | 0.286 |
 | Avg | 0.811 | 0.822 | 0.793 | 0.010 | 10.167 | 0.236 | 0.940 | 0.593 | 0.311 |
acc | EHBCS | 0.797 | 0.803 | 0.793 | 0.005 | 13.800 | 0.220 | 0.974 | 0.750 | 0.322 |
 | BGA | 0.822 | 0.863 | 0.788 | 0.020 | 13.500 | 0.210 | 0.981 | 0.735 | 0.302 |
 | BPSO | 0.806 | 0.829 | 0.793 | 0.010 | 15.200 | 0.191 | 0.968 | 0.733 | 0.289 |
 | Avg | 0.808 | 0.832 | 0.791 | 0.012 | 14.167 | 0.207 | 0.974 | 0.739 | 0.304 |
fit | EHBCS | 0.755 | 0.778 | 0.735 | 0.017 | 15.200 | 0.947 | 0.960 | 0.936 | 0.940 | Sonar
 | BGA | 0.816 | 0.880 | 0.778 | 0.036 | 11.000 | 0.970 | 0.955 | 0.926 | 0.947 |
 | BPSO | 0.637 | 0.663 | 0.610 | 0.018 | 25.200 | 0.954 | 0.941 | 0.907 | 0.930 |
 | Avg | 0.736 | 0.774 | 0.708 | 0.024 | 17.133 | 0.952 | 0.502 | 0.704 | 0.789 |
acc | EHBCS | 0.752 | 0.773 | 0.730 | 0.014 | 16.600 | 0.957 | 0.969 | 0.950 | 0.952 |
 | BGA | 0.800 | 0.865 | 0.760 | 0.035 | 13.400 | 0.954 | 0.959 | 0.940 | 0.945 |
 | BPSO | 0.631 | 0.644 | 0.620 | 0.009 | 27.600 | 0.967 | 0.961 | 0.937 | 0.946 |
 | Avg | 0.728 | 0.761 | 0.703 | 0.019 | 19.200 | 0.974 | 0.451 | 0.684 | 0.788 |


Fitness | Algorithm | Avgacc | Max | Min | Std | AvgN | SE | SP | Pre | F1 | Dataset

fit | EHBCS | 0.899 | 0.901 | 0.886 | 0.006 | 931.833 | 0.927 | 0.860 | 0.938 | 0.922 | Colon Tumor
 | BGA | 0.893 | 0.901 | 0.885 | 0.008 | 905.000 | 0.927 | 0.860 | 0.925 | 0.922 |
 | BPSO | 0.877 | 0.886 | 0.868 | 0.008 | 984.300 | 0.927 | 0.753 | 0.888 | 0.901 |
 | Avg | 0.889 | 0.896 | 0.879 | 0.007 | 933.744 | 0.927 | 0.824 | 0.917 | 0.915 |
acc | EHBCS | 0.898 | 0.903 | 0.886 | 0.006 | 1103.800 | 0.927 | 0.860 | 0.925 | 0.922 |
 | BGA | 0.893 | 0.903 | 0.886 | 0.011 | 1033.000 | 0.927 | 0.860 | 0.905 | 0.901 |
 | BPSO | 0.884 | 0.901 | 0.869 | 0.009 | 1632.100 | 0.927 | 0.793 | 0.910 | 0.913 |
 | Avg | 0.892 | 0.902 | 0.881 | 0.007 | 1256.300 | 0.927 | 0.838 | 0.913 | 0.912 |
fit | EHBCS | 0.876 | 0.876 | 0.876 | 0 | 2776.200 | 0.920 | 0.600 | 0.931 | 0.916 | Medulloblastomas
 | BGA | 0.807 | 0.842 | 0.783 | 0.022 | 2798.600 | 0.920 | 0.600 | 0.931 | 0.917 |
 | BPSO | 0.758 | 0.795 | 0.733 | 0.019 | 2927.800 | 0.910 | 0.600 | 0.921 | 0.912 |
 | Avg | 0.814 | 0.838 | 0.797 | 0.014 | 2834.200 | 0.917 | 0.600 | 0.928 | 0.915 |
acc | EHBCS | 0.862 | 0.876 | 0.848 | 0.014 | 2899.800 | 0.910 | 0.550 | 0.915 | 0.907 |
 | BGA | 0.798 | 0.817 | 0.783 | 0.015 | 2960.100 | 0.900 | 0.600 | 0.931 | 0.941 |
 | BPSO | 0.764 | 0.767 | 0.762 | 0.002 | 3323.750 | 0.900 | 0.600 | 0.898 | 0.894 |
 | Avg | 0.808 | 0.820 | 0.798 | 0.010 | 3061.217 | 0.903 | 0.583 | 0.915 | 0.914 |
fit | EHBCS | 0.740 | 0.750 | 0.733 | 0.008 | 3315.100 | 0.436 | 0.894 | 0.767 | 0.525 | Central Nervous System
 | BGA | 0.672 | 0.700 | 0.650 | 0.017 | 3425.300 | 0.360 | 0.794 | 0.430 | 0.380 |
 | BPSO | 0.640 | 0.683 | 0.617 | 0.024 | 3569.100 | 0.360 | 0.737 | 0.396 | 0.360 |
 | Avg | 0.684 | 0.711 | 0.667 | 0.016 | 3436.500 | 0.385 | 0.808 | 0.532 | 0.422 |
acc | EHBCS | 0.740 | 0.767 | 0.717 | 0.017 | 3693.600 | 0.500 | 0.894 | 0.787 | 0.572 |
 | BGA | 0.670 | 0.717 | 0.633 | 0.023 | 3561.400 | 0.360 | 0.786 | 0.706 | 0.369 |
 | BPSO | 0.643 | 0.650 | 0.633 | 0.008 | 4844.200 | 0.320 | 0.737 | 0.377 | 0.332 |
 | Avg | 0.684 | 0.711 | 0.661 | 0.016 | 4033.067 | 0.393 | 0.806 | 0.623 | 0.425 |
fit | EHBCS | 0.971 | 0.973 | 0.960 | 0.005 | 3482.000 | 1 | 0.930 | 0.953 | 0.975 | Relation Leukemia
 | BGA | 0.970 | 0.973 | 0.960 | 0.006 | 3401.400 | 1 | 0.885 | 0.922 | 0.955 |
 | BPSO | 0.952 | 0.960 | 0.947 | 0.007 | 3563.100 | 1 | 0.910 | 0.937 | 0.965 |
 | Avg | 0.962 | 0.969 | 0.956 | 0.006 | 3482.167 | 1 | 0.908 | 0.937 | 0.965 |
acc | EHBCS | 0.973 | 0.987 | 0.960 | 0.006 | 4008.500 | 1 | 0.926 | 0.951 | 0.973 |
 | BGA | 0.971 | 0.973 | 0.960 | 0.005 | 3573.100 | 1 | 0.910 | 0.937 | 0.965 |
 | BPSO | 0.955 | 0.960 | 0.947 | 0.007 | 5661.200 | 1 | 0.910 | 0.937 | 0.965 |
 | Avg | 0.967 | 0.973 | 0.956 | 0.006 | 4414.267 | 1 | 0.915 | 0.942 | 0.968 |

Fitness: "acc" denotes classification accuracy as defined by Function (23) in Section 4.2, and "fit" denotes the proposed fitness Function (28) defined in Section 4.4

Algorithm: it provides the abbreviations of the algorithms, Elite Hybrid Binary Cuckoo Search (EHBCS), Binary Genetic Algorithm (BGA), and Binary Particle Swarm Optimization (BPSO)

Avgacc, Max, Min: average accuracy, maximum accuracy, minimum accuracy of an algorithm during the 5 runs

Std: standard deviation of classification accuracy

AvgN: average number of features returned by the algorithm during the 5 runs

SE, SP, Pre, F1: average sensitivity, specificity, precision, and F-measure of an algorithm during the 5 runs

Dataset: the dataset used for experimentation as described in Table 1

Avg: average of all corresponding data obtained by the three algorithms

The experimental results show that the average feature subsets are smaller for all datasets and that the average classification accuracy improves to different degrees. Compared with the original datasets, the average number of features after feature selection by the optimization algorithms was reduced by about 18.395%-89.667%, and the average classification accuracy was improved by about 3.3%-34.6%. For the Breast Cancer Wisconsin (diagnostic) dataset, the maximum average classification accuracy improvement of 34.6% was achieved. All of this implies that feature selection methods based on metaheuristic optimization algorithms can effectively eliminate redundant features and significantly improve classification accuracy, especially for some datasets.

For low-dimensional datasets, such as Cervical Cancer Behavior Risk, Breast Cancer Wisconsin (diagnostic), Breast Cancer Wisconsin (prognosis), and Sonar, the EHBCS algorithm effectively reduces the feature set to a smaller target subset. It attains the minimum standard deviation of the three algorithms, which shows that EHBCS is the most stable of the three, but it ranks only second among the three optimization algorithms in classification accuracy, SE, SP, Pre, and F1. Compared with the corresponding Avg values, the EHBCS algorithm has the minimum standard deviation and, on the whole, higher classification accuracy, SE, SP, Pre, and F1. Compared with classification on the original datasets, the number of features after selection by the EHBCS algorithm is reduced by 58.182%-80%, and the classification accuracy is improved by 5%-33.9%. The results show that the EHBCS algorithm can efficiently reduce the number of features while preserving accuracy, although it is not the best performer on low-dimensional datasets.

For high-dimensional datasets, such as Colon Tumor, Medulloblastomas, Central Nervous System, and Relation Leukemia, the average classification accuracy, standard deviation, SE, SP, Pre, and F1 obtained by the EHBCS algorithm are, on the whole, superior to those of BGA and BPSO. Compared with the corresponding Avg values, the average classification accuracy of the EHBCS algorithm is improved by 1%-10.6%, and EHBCS attains a lower standard deviation; it should be noted, however, that the standard deviation of the EHBCS algorithm exceeds the corresponding Avg value when the fitness of Function (23) is adopted on the Medulloblastomas and Central Nervous System datasets. Beyond these, SE, SP, Pre, and F1 are optimal overall. Compared with classification on the original datasets, the number of features after selection by the EHBCS algorithm is reduced by 43.772%-53.498%, and the classification accuracy is improved by 4.5%-22.8%. These results show that feature selection based on EHBCS yields higher classification accuracy, SE, SP, Pre, and F1 with a smaller standard deviation, making the EHBCS algorithm better suited to feature selection on high-dimensional datasets.
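
The wrapper loop underlying this kind of metaheuristic feature selection can be sketched generically. The following is a plain binary cuckoo search skeleton in the style of published BCS variants (continuous nests moved by Lévy flights, squashed through a sigmoid into 0/1 feature masks), NOT the paper's EHBCS: it has no feature weighting or elite strategy, uses a nearest-centroid stand-in instead of the SVM classifier, and runs on synthetic data where only the first 3 of 20 features are informative.

```python
# Generic binary cuckoo search (BCS) sketch for wrapper feature selection.
# Illustrative only; not the paper's EHBCS algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-label data: only the first 3 of 20 features carry signal.
n, d = 120, 20
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += 2.0 * y[:, None]

def accuracy(mask):
    """Fitness: training accuracy of a nearest-centroid rule on the subset."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return float((pred == y).mean())

def levy(size, beta=1.5):
    """Levy-distributed steps via Mantegna's algorithm."""
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def binarize(x):
    """Sigmoid transfer function: continuous position -> 0/1 feature mask."""
    return (1 / (1 + np.exp(-np.clip(x, -50, 50))) > rng.random(x.shape)).astype(int)

n_nests, pa, iters = 15, 0.25, 40
nests = rng.normal(size=(n_nests, d))               # continuous nest positions
masks = np.array([binarize(x) for x in nests])
fits = np.array([accuracy(m) for m in masks])

for _ in range(iters):
    best = nests[fits.argmax()]
    # Levy-flight move biased toward the current best nest
    cand = nests + 0.1 * levy((n_nests, d)) * (nests - best)
    cmask = np.array([binarize(x) for x in cand])
    cfit = np.array([accuracy(m) for m in cmask])
    improved = cfit > fits
    nests[improved], masks[improved], fits[improved] = cand[improved], cmask[improved], cfit[improved]
    # Abandon a fraction pa of the worst nests (random restarts)
    worst = fits.argsort()[: int(pa * n_nests)]
    nests[worst] = rng.normal(size=(len(worst), d))
    masks[worst] = np.array([binarize(x) for x in nests[worst]])
    fits[worst] = np.array([accuracy(m) for m in masks[worst]])

best_mask = masks[fits.argmax()]
```

In the paper's wrapper setting, `accuracy` would instead be the cross-validated SVM fitness, and EHBCS would additionally apply its feature weighting and elite strategy to guide the search.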

It should be emphasized that the purpose of feature selection is to remove irrelevant or weakly correlated features as far as possible while preserving classification accuracy. The number of selected features cannot be reduced indefinitely, however: too small a subset may discard important features and degrade the classification accuracy on the datasets. It is therefore necessary to balance classification accuracy against the number of selected features, and in practical applications the evaluation function should be designed carefully to ensure the classification performance of the feature subset.
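
One common way to encode this trade-off (illustrative only; the paper's actual Function (28) is not reproduced here, and `alpha` is a hypothetical weight) is a wrapper fitness that rewards accuracy while penalizing subset size:

```python
# Illustrative wrapper fitness balancing accuracy against subset size.
# Not the paper's Function (28); alpha is a hypothetical weight.

def subset_fitness(accuracy, n_selected, n_total, alpha=0.99):
    """Higher is better: weight accuracy heavily, lightly reward small subsets."""
    return alpha * accuracy + (1 - alpha) * (1 - n_selected / n_total)

# With equal accuracy, the smaller feature subset scores higher:
big = subset_fitness(accuracy=0.95, n_selected=5000, n_total=7000)
small = subset_fitness(accuracy=0.95, n_selected=500, n_total=7000)
```

Setting `alpha` close to 1 makes the size penalty act only as a tie-breaker, so accuracy is never sacrificed for a marginally smaller subset.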

6. Conclusion

This paper proposes an Elite Hybrid Binary Cuckoo Search (EHBCS) algorithm that adopts feature weighting and an elite strategy. The proposed algorithm optimizes the feature selection task on binary-label datasets. The experimental results show that EHBCS achieves better classification performance, and all statistical metrics (standard deviation (Std), sensitivity (SE), specificity (SP), precision (Pre), and F1-measure (F1)) show that EHBCS is markedly superior to BGA and BPSO. The algorithm still has shortcomings, however, such as increased computational complexity.

Future work will further modify the proposed algorithm to make it suitable for feature selection on multiclass datasets and will evaluate the results using different datasets and classification models.

Data Availability

The data are available at the dataset sites: http://archive.ics.uci.edu/ml, http://portals.broadinstitute.org/cgi-bin/cancer/datasets.cgi, and http://csse.szu.edu.cn/staff/zhuzx/Datasets.html.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Copyright © 2021 Maoxian Zhao and Yue Qin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
