Abstract

In the era of big data, feature selection is an essential step in machine learning. Although the class imbalance problem has recently attracted a great deal of attention, little effort has been devoted to feature selection techniques for imbalanced data. In addition, most applications of feature selection focus on classification accuracy rather than cost, although costs are important. To cope with imbalance problems, we developed a cost-sensitive feature selection algorithm, referred to as CSFSG, which adds a cost-based evaluation function to a filter feature selection method driven by a chaos genetic algorithm. The evaluation function considers both feature-acquisition costs (test costs) and misclassification costs in the field of network security, thereby weakening the influence of the many majority-class instances in large-scale datasets. The CSFSG algorithm reduces the total cost of feature selection and trades off the two cost factors. The behavior of the CSFSG algorithm is tested on a large-scale network security dataset using two kinds of classifiers: C4.5 and k-nearest neighbor (KNN). The experimental results show that the approach is efficient, effectively improves classification accuracy, and decreases classification time. In addition, the results of our method are more promising than those of other cost-sensitive feature selection algorithms.

1. Introduction

The class imbalance problem is found in various scientific and social arenas, such as fraud/intrusion detection, spam detection, risk management, technical diagnostics/monitoring, financial engineering, and medical diagnostics [14]. In most applications, correctly classifying the minority class is more important than correctly classifying the majority class, even though the minority class contains far fewer instances.

There are essentially two approaches to the class imbalance problem: sampling methods and cost-sensitive learning methods [1]. The objective of sampling methods and synthetic data generation is to produce a relatively balanced class distribution through oversampling and/or undersampling techniques [5]. A very popular oversampling approach is the Synthetic Minority Oversampling Technique (SMOTE), which produces synthetic minority-class samples rather than oversampling with replacement [6]. For high-dimensional data, Blagus and Lusa showed that SMOTE does not change the class-specific mean values but decreases data variability and introduces correlation between samples [7]. Cost-sensitive learning methods introduce a cost matrix in order to minimize total costs while maximizing accuracy [8]. When learning from imbalanced data, most classifiers are overwhelmed by the majority-class samples, so the false negative rate is always high [9]. Researchers have introduced many methods to address these problems, including combining sampling techniques with cost-sensitive learning, setting the cost ratio by inverting the prior class distributions, and collecting the cost of features before classification [5, 8, 9].

Most data mining techniques are not designed to cope with large numbers of features, a difficulty widely known as the curse of dimensionality [10], and feature selection is a common remedy. The class imbalance problem becomes even more severe when data dimensionality is high. Among the many methods that exploit feature selection, the most common retain only the relevant features, which makes subsequent learning more efficient and effective. Many studies have recognized the importance of feature selection and addressed it from various perspectives, and it is increasingly applied to class imbalance problems.

In this paper, we investigate cost-sensitive feature selection in an imbalanced scenario. Specifically, we first illustrate the imbalance problem, the most relevant topic of the current research, and then briefly introduce cost-sensitive learning and its application to feature selection. We then propose a new feature selection method whose goal is to provide an efficient approach for the field of network security, where large, imbalanced datasets are typical. Thus, rather than improving on previous methods, our purpose is to match the performance of previous cost-sensitive feature selection approaches with a method that handles very large datasets with imbalance problems.

2.1. Cost-Sensitive Learning

Different costs are associated with different misclassification errors in real world applications [11]. Cost-sensitive learning takes into account the variable cost of misclassifying different classes [12, 13]. In most cases, cost-sensitive learning algorithms are designed to minimize total costs while introducing multiple costs. In 2000, Turney presented the main types of costs involved in inductive concept learning [14].

Cost-sensitive learning involves two major types of costs: misclassification costs and test costs [11]. Misclassification costs can be viewed as the penalties incurred by incorrectly classifying an object into a certain class. Traditional machine learning methods are largely aimed at minimizing the error rate and assume uniform error costs and relatively balanced class distributions. Typically, the cost of misclassifying an abnormal incident as normal is much higher than the cost of misclassifying a normal incident as abnormal. Thus, misclassification costs, rather than misclassification errors, must be minimized. There are two types of misclassification costs: example-dependent costs and class-dependent costs [11].

Test costs typically refer to money, time, computing, or other resources that are expended to obtain data items related to an object [8]. There are numerous types of measurement methods with different test costs; higher test costs are required to obtain data characterized by smaller measurement error. An appropriate measurement should be selected, and the total test cost should be minimized.

Some studies focus on misclassification costs but fail to consider test costs [15]. Others consider test costs but not misclassification costs [16]. Turney was the first to consider both test and misclassification costs, and this combined treatment has gradually become one of the foremost trends in the research.

2.2. Cost-Sensitive Feature Selection

In general, classification time increases with the number of features based on the computational complexity of the classification algorithm. However, it is possible to alleviate the curse of dimensionality by reducing the number of features, although this may weaken discriminating power.

A classifier can be understood as a specific function that maps a feature vector onto a class label [17]. An algorithm that can guide the active acquisition of information and balance the costs is often termed a cost-sensitive classifier [18]. The acquisition cost for selected features is an important issue in some applications, and more researchers have taken the feature acquisition cost into account in the feature selection criterion [19]. Ji and Carin introduced many cost-sensitive feature selection criteria, while traditional methods select all the useful features simultaneously by setting the weights on the redundant features to zero [17].

Several works have addressed cost-sensitive feature selection in recent years. For example, Bosin et al. [20] present a cost-sensitive feature selection approach that focuses on the quality of features and the cost of computation. Despite its complexity, this method increases classifier accuracy by defining a feature selection criterion that judges the goodness of a feature subset with respect to a particular classification model.

Mejía-Lavalle [21] proposes a feature selection heuristic that takes cost-sensitive classification into account. Unlike most feature selection studies, which evaluate only accuracy and processing time, this work evaluates different feature selection-ranking methods over large datasets. In addition, the heuristic separates relevant from irrelevant features by stressing the region around the decision boundary.

Wang et al. [16] address the issue of data overfitting by designing three simple and efficient strategies—feature selection, smoothing, and threshold pruning—for the test cost-sensitive decision tree method. Before applying the test cost-sensitive decision tree algorithm, they use a feature selection procedure that considers both test costs and misclassification costs to preprocess the dataset.

In a study by Lee et al. [22], a spam detection model is proposed that is the first to take into account both the importance of the feature variables and the optimal number of features. The optimal number of selected features is decided using two methods: a single parameter optimization over the whole feature selection process and a parameter optimization in every feature-elimination phase.

Chang et al. [23] propose an efficient hierarchical classification scheme and a cost-based group-feature selection criterion to improve feature calculation and classification accuracy. The approach combines a computational-cost-sensitive group-feature selection criterion with Sequential Forward Selection (SFS) to obtain a class-specific quasi-optimal feature set.

Zhao et al. [24] define a new cost-sensitive feature selection problem on a covering-based rough set model with normally distributed measurement errors. Unlike existing models, they propose backtracking and heuristic algorithms that operate mainly on the error boundaries, taking both test costs and misclassification costs into account.

Liu et al. [25] propose a new cost-sensitive learning method for software defect prediction that utilizes cost information in two stages: the feature selection stage and the classification stage.

However, few studies have focused on the class imbalance problem from the viewpoint of cost-sensitive feature selection. To the best of our knowledge, no study addresses cost-sensitive feature selection in the network security field, because of the significant domain differences and dependences.

3. Cost-Sensitive Feature Selection Model

3.1. Problem Formulation

Here, we present the common notation and use an intrusion detection event taken from the KDD CUP’99 dataset [26] to illustrate it.

Let the original feature set be $F = \{f_1, f_2, \ldots, f_n\}$, where $n$ is the feature dimension count. The feature selection problem is to find a subset $S \subseteq F$ that maximizes a scoring function; simultaneously, $S$ should be an optimal subset that gives the best possible classification accuracy.

Assume that, in the instance space, we have a set of samples $x_i$ ($i = 1, 2, \ldots, m$), where $m$ is the number of samples. Let $x_{ij}$ denote the $j$th feature of sample $x_i$. The labeled instance space, also called the universal instance space $U$, is defined as the Cartesian product of the input instance space $X$ and the target feature domain $Y$; that is, $U = X \times Y$. The training set is denoted as $D = \{(x_1, y_1), \ldots, (x_m, y_m)\} \subseteq U$, where $Y$ contains $c$ classes and $y_i \in Y$. The notation $I(D)$ represents a classifier trained using inducer $I$ on the training dataset $D$.

The cost-sensitive feature selection problem is also called the feature selection with minimal average total cost problem [24]. In this paper, we focus on cost-sensitive feature selection based on both misclassification costs and test costs. Unlike generic feature selection algorithms, which are used only to improve accuracy or to reduce measurement error, feature selection in our study also aims to minimize the average cost by trading off test costs against misclassification costs [8]. In other words, our optimization objective is to minimize the average total cost.

Let MC be the misclassification cost matrix and let TC be the test cost matrix. For a selected feature subset $S$ and a classifier $h$ built on $S$, the average total cost over the $m$ samples can be written as
$$\mathrm{AvgCost}(S) = \frac{1}{m} \sum_{i=1}^{m} \Bigl( \sum_{f_j \in S} \mathrm{TC}(f_j) + \mathrm{MC}\bigl(h(x_i), y_i\bigr) \Bigr),$$
that is, the sum of the test costs of the selected features and the average misclassification cost.

3.2. Cost Information

In the real world, there are many types of costs associated with a potential instance, such as the cost of additional tests, the cost associated with expert analysis, and intervention costs [11]. Different applications are usually associated with various costs involving misclassification and test costs [19].

Without loss of generality, let $C(i, j)$ be the cost of predicting an instance of class $j$ as class $i$. When addressing imbalance problems, misclassification costs can be categorized into four types: (1) the false positive (FP) cost, $C(+, -)$, incurred when an instance of the negative class is misclassified as positive; (2) the false negative (FN) cost, $C(-, +)$, incurred in the opposite case; (3) the misclassification costs of true positives (TP) and true negatives (TN), which are both zero. Typically, it is more important to recognize positive rather than negative instances, so $C(-, +) > C(+, -)$.
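As an illustration, a minimal sketch of such a class-dependent cost matrix for a two-class intrusion problem is given below; the concrete cost values are hypothetical and only demonstrate that the false negative cost is set much larger than the false positive cost.

```java
// Minimal sketch of a class-dependent misclassification cost matrix.
// Row index = actual class, column index = predicted class (0 = attack/positive, 1 = normal/negative).
// The values are hypothetical and only illustrate C(FN) >> C(FP).
public class CostMatrixExample {
    static final double[][] MC = {
        //                  predicted: attack  normal
        /* actual attack */ {          0.0,    10.0 },  // false negative: missed attack (expensive)
        /* actual normal */ {          1.0,     0.0 }   // false positive: false alarm (cheap)
    };

    static double cost(int actual, int predicted) {
        return MC[actual][predicted];
    }

    public static void main(String[] args) {
        System.out.println("Cost of missing an attack (FN): " + cost(0, 1));
        System.out.println("Cost of a false alarm (FP):     " + cost(1, 0));
    }
}
```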

The cost-sensitive classification problem can be cast as a decision-theoretic problem using Bayesian decision theory [27, 28]. We assume that the class probability $P(j \mid \mathbf{x})$ is determined by a subset of the features of instance $\mathbf{x}$, while the remaining features are irrelevant or redundant. The optimal prediction for $\mathbf{x}$ is the class $y^{*}$ that minimizes the expected loss [27]:
$$y^{*} = \arg\min_{i} \sum_{j} P(j \mid \mathbf{x}) \, C(i, j).$$
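A minimal sketch of this decision rule is shown below: given class posteriors $P(j \mid \mathbf{x})$ and a cost matrix $C(i, j)$, it returns the class with minimum expected loss. The posteriors and cost values in the example are illustrative assumptions; in practice they would come from a trained probabilistic classifier and the cost model of Section 3.2.

```java
// Sketch of the minimum-expected-cost decision rule (Bayesian risk minimization).
// posterior[j] = P(class j | x); cost[i][j] = cost of predicting class i when the true class is j.
// Both arrays are assumed to be supplied externally; the values in main() are illustrative only.
public class MinExpectedCost {
    static int optimalPrediction(double[] posterior, double[][] cost) {
        int best = 0;
        double bestRisk = Double.MAX_VALUE;
        for (int i = 0; i < cost.length; i++) {
            double risk = 0.0;                       // expected loss of predicting class i
            for (int j = 0; j < posterior.length; j++) {
                risk += posterior[j] * cost[i][j];
            }
            if (risk < bestRisk) { bestRisk = risk; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] posterior = {0.2, 0.8};             // P(attack | x), P(normal | x)
        double[][] cost = {{0, 1}, {10, 0}};         // missing an attack is ten times costlier
        // Prints 0 (attack): the costly class is predicted despite its lower posterior.
        System.out.println("Optimal class index: " + optimalPrediction(posterior, cost));
    }
}
```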

Although decision tree building does not require a cost-sensitive measure for feature selection, an algorithm does require the cost-sensitive property in order to rank or weight features according to their importance [28]. Such a ranking allows feature selection to confirm or expand domain knowledge.

Borrowing ideas from credit card and cellular phone fraud detection, the Lee research group identifies three major cost factors related to intrusion detection: damage cost (DCost), response cost (RCost), and operational cost (OpCost) [29]. The misclassification cost of predicting an attack $a$ as attack $a'$ can be identified by the following formula:
$$\mathrm{MCost}(a, a') = \mathrm{RCost}(a') + \varepsilon(a) \cdot \mathrm{DCost}(a),$$
where $\varepsilon(a)$ is a function of the progress and effect of the attack. For example, from Table 1 we can see that the cost of misclassifying a DOS attack as PROBE is $\mathrm{RCost}(\mathrm{PROBE}) + \mathrm{DCost}(\mathrm{DOS})$.

The expected misclassification cost for a new example $(\mathbf{x}, y)$ drawn at random from the distribution $\mathcal{D}$ is as follows:
$$E_{(\mathbf{x}, y) \sim \mathcal{D}}\bigl[\mathrm{MCost}\bigl(h(\mathbf{x}), y\bigr)\bigr],$$
where $h(\mathbf{x})$ denotes the class predicted by the classifier.

Based on the study by Lee et al. [29], which extracts and constructs predictive features from network audit data, our approach divides the features into four relative levels according to their computational costs; see Table 2. Since OpCost is the cost of the time and computing resources spent on extracting or analyzing features while processing the stream of events [29], we assume that the acquisition (test) cost of a feature can be associated with its feature level.

We assume that both misclassification and test costs are given in the same cost scale. Therefore, summing together the two costs to obtain the average total cost is feasible.

3.3. Cost-Sensitive Fitness Function

Unlike traditional feature selection algorithms, whose purpose is to improve classification accuracy or reduce measurement error, this paper attempts to minimize the total cost and to trade off cost against classification accuracy. The final objective of the feature selection problem is therefore to select a feature subset with minimal size and total average cost and maximal classification accuracy.

Let the candidate feature subset $S$ contain $k$ features. We calculate the average total cost of the dataset, reconstruct the training and test datasets over $S$, and obtain the recognition rate $\mathrm{acc}(S)$ with the nearest neighbor method for each group of selected subsets. Given the number of features $k$ of candidate $S$, the fitness function is constructed as
$$\mathrm{fitness}(S) = \alpha \cdot \mathrm{acc}(S) + (1 - \alpha)\Bigl(1 - \frac{k}{n}\Bigr) - \lambda \cdot \mathrm{AvgCost}(S),$$
where $\alpha$ is a parameter that balances the number of selected features against the weight of the recognition rate $\mathrm{acc}(S)$, and $\lambda$ is a parameter introduced to weight the influence of the average total cost in the fitness function. Thus, the fewer the selected feature candidates and the greater the recognition rate, the greater the fitness $\mathrm{fitness}(S)$.
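To make the evaluation concrete, the sketch below computes the fitness under the reconstructed form above. The recognition rate and average total cost are passed in as precomputed values, and the ALPHA and LAMBDA constants are illustrative assumptions rather than the settings used in our experiments.

```java
// Minimal sketch of the cost-sensitive fitness function, assuming the reconstructed form
// fitness(S) = alpha * acc(S) + (1 - alpha) * (1 - k/n) - lambda * AvgCost(S).
public class CostSensitiveFitness {
    static final double ALPHA = 0.7;   // weight of recognition rate vs. subset size (assumed value)
    static final double LAMBDA = 0.1;  // weight of the average total cost (assumed value)

    static double fitness(boolean[] subset, double accuracy, double avgTotalCost) {
        int k = 0;
        for (boolean selected : subset) if (selected) k++;
        double sizeTerm = 1.0 - (double) k / subset.length;  // rewards small subsets
        return ALPHA * accuracy + (1.0 - ALPHA) * sizeTerm - LAMBDA * avgTotalCost;
    }

    public static void main(String[] args) {
        boolean[] subset = {true, false, true, false, false}; // 2 of 5 features selected
        // A 0.92 recognition rate and a 3.5 average total cost are illustrative numbers only.
        System.out.println("fitness = " + fitness(subset, 0.92, 3.5));
    }
}
```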

3.4. Chaos Optimization and Genetic Algorithm

The genetic algorithm (GA) is a popular, easily parallelized method with a powerful global search capability and is widely used in feature selection [30]. Weiss et al. [8] proposed a cost-sensitive feature selection method using histograms based on genetic search. Chen et al. [30] combined chaos optimization with a GA to choose a subset of the available features by eliminating unnecessary features from the classification task. Shen and Gao [31] proposed feature selection based on a chaos search to improve weld defect classification accuracy. Because chaotic movement can cover all states in a certain range without repetition, the chaotic optimization algorithm (COA) greatly improves the low search efficiency that characterizes the late evolutionary period of the GA [32, 33].

Most studies introduce the logistic map [31]: $x_{n+1} = \mu x_n (1 - x_n)$, where $\mu \leq 4$ is a control parameter, $x_n$ is the $n$th chaotic variable, and $n$ denotes the number of iterations. When $\mu = 4$ and $x_0$ is distributed in the range $(0, 1)$, the map is a deterministic dynamic system in a complete state of chaos. However, the sequence generated by the logistic map is concentrated near the distribution boundaries, which cannot satisfy unknown-distribution problems that require a highly even individual distribution in order to avoid asymmetry in the initial population of the GA.

By comparing ten one-dimensional chaotic maps in terms of convergence rate, algorithm speed, and accuracy, Tavazoei and Haeri found that no single map has the best global optimization ability [34]. In this paper, the Tent map was applied because it has the maximum convergence rate; its mathematical expression can be defined by [35]
$$x_{n+1} = \begin{cases} x_n / \alpha, & 0 \leq x_n < \alpha, \\ (1 - x_n)/(1 - \alpha), & \alpha \leq x_n \leq 1. \end{cases}$$
When $\alpha = 0.5$, the result is the well-known Center Tent map:
$$x_{n+1} = 1 - 2\,\lvert x_n - 0.5 \rvert.$$
With the Bernoulli shift transformation, the expression can be written as
$$x_{n+1} = (2 x_n) \bmod 1.$$

Because the Tent map otherwise requires the most computational time, which seriously affects algorithm speed, we improved it by introducing a random perturbation based on [35]. The chaos expression becomes
$$x_{n+1} = \bigl((2 x_n) \bmod 1\bigr) + \delta,$$
where $\delta$ is a small random perturbation applied whenever the sequence reaches a fixed point or a small-cycle point. As a consequence, the Tent map escapes small-cycle points and achieves global chaos optimization more efficiently.
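A minimal sketch of how such a Tent-map sequence could seed the initial GA population is shown below. The perturbation rule (triggered at 0, 0.25, 0.5, 0.75, and 1), the 0.5 bit threshold, and the fixed random seed are illustrative assumptions rather than the exact implementation.

```java
import java.util.Random;

// Sketch of Tent-map-based initialization of the GA population, using the Bernoulli-shift
// form x_{n+1} = (2 * x_n) mod 1 with a small random perturbation (our assumption about the
// "random equation" mentioned above) applied at fixed points and small-cycle points.
public class TentMapInit {
    static final Random RNG = new Random(42);

    static double nextTent(double x) {
        if (x == 0.0 || x == 0.25 || x == 0.5 || x == 0.75 || x == 1.0) {
            x += RNG.nextDouble() * 1e-3;            // escape fixed / small-cycle points
        }
        return (2.0 * x) % 1.0;
    }

    // One chromosome = one candidate feature subset; bit i is set when the chaotic value > 0.5.
    static boolean[][] initialPopulation(int popSize, int numFeatures) {
        boolean[][] pop = new boolean[popSize][numFeatures];
        double x = RNG.nextDouble();
        for (int p = 0; p < popSize; p++) {
            for (int f = 0; f < numFeatures; f++) {
                x = nextTent(x);
                pop[p][f] = x > 0.5;
            }
        }
        return pop;
    }

    public static void main(String[] args) {
        boolean[][] pop = initialPopulation(20, 41);  // 20 individuals over the 41 KDD features
        System.out.println("First bit of first chromosome: " + pop[0][0]);
    }
}
```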

3.5. Cost-Sensitive Feature Selection Model Using Chaos Genetic Algorithm

In this section, we propose a cost-sensitive feature selection model that uses the new cost-sensitive fitness function and a chaos GA to solve the class imbalance problem. The algorithm follows the filter approach and is therefore not tied to a particular classifier. Finding a minimal-cost optimal feature subset is NP-hard, and the difficulty is compounded by imbalanced datasets. It is nevertheless important to combine the feature selection procedure with the learning model [8]. Therefore, the proposed CSFSG algorithm employs a chaos GA as its search method.

Our algorithm consists of four main steps (see Figure 1).

Step 1 (preprocessing). Convert discrete and symbolic attributes to numeric values, normalize all values to the range $[0, 1]$ in accordance with the cost-sensitive heuristic rule, and calculate the misclassification cost matrix and the test costs.

Step 2. Generate and encode the initial population using the Tent map. Set the crossover probability $p_c$, the mutation probability $p_m$, the population size, and the number of generations.

Step 3. Calculate the fitness value of each individual and select the optimum population based on the cost-sensitive fitness function.

Step 4. Apply the GA to the population, search the candidate feature subsets using the chaos optimization algorithm, and update the current population.
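As a rough illustration of how Steps 2–4 fit together, the self-contained sketch below runs a generational GA over bit-string chromosomes seeded by the Tent map. The fitness function is reduced to a size-only stub, whereas the full algorithm would plug in the cost-sensitive fitness of Section 3.3, and the tournament selection, one-point crossover, and bit-flip mutation operators are generic choices rather than the exact operators of CSFSG.

```java
import java.util.Random;

// Compact sketch of the CSFSG search loop (Steps 2-4) with a placeholder fitness function.
public class CsfsgSketch {
    static final int POP_SIZE = 20, GENERATIONS = 20, NUM_FEATURES = 41;
    static final double P_CROSSOVER = 0.6, P_MUTATION = 0.033;
    static final Random RNG = new Random(1);

    // Placeholder: rewards small subsets only; the real fitness would also use the
    // nearest-neighbour recognition rate and the average total cost of the subset.
    static double fitness(boolean[] subset) {
        int k = 0;
        for (boolean b : subset) if (b) k++;
        return 1.0 - (double) k / NUM_FEATURES;
    }

    static boolean[][] tentMapPopulation() {               // Step 2: chaotic initialization
        boolean[][] pop = new boolean[POP_SIZE][NUM_FEATURES];
        double x = RNG.nextDouble();
        for (boolean[] chrom : pop)
            for (int f = 0; f < NUM_FEATURES; f++) {
                if (x == 0.0 || x == 0.5) x += 1e-3 * RNG.nextDouble();  // escape cycle points
                x = (2.0 * x) % 1.0;
                chrom[f] = x > 0.5;
            }
        return pop;
    }

    static boolean[] tournament(boolean[][] pop) {          // Step 3: fitness-based selection
        boolean[] a = pop[RNG.nextInt(POP_SIZE)], b = pop[RNG.nextInt(POP_SIZE)];
        return fitness(a) >= fitness(b) ? a.clone() : b.clone();
    }

    public static void main(String[] args) {
        boolean[][] pop = tentMapPopulation();
        for (int g = 0; g < GENERATIONS; g++) {              // Step 4: evolve the population
            boolean[][] next = new boolean[POP_SIZE][];
            for (int i = 0; i < POP_SIZE; i++) {
                boolean[] child = tournament(pop);
                if (RNG.nextDouble() < P_CROSSOVER) {         // one-point crossover
                    boolean[] mate = tournament(pop);
                    int cut = RNG.nextInt(NUM_FEATURES);
                    for (int f = cut; f < NUM_FEATURES; f++) child[f] = mate[f];
                }
                for (int f = 0; f < NUM_FEATURES; f++)        // bit-flip mutation
                    if (RNG.nextDouble() < P_MUTATION) child[f] = !child[f];
                next[i] = child;
            }
            pop = next;
        }
        boolean[] best = pop[0];
        for (boolean[] chrom : pop) if (fitness(chrom) > fitness(best)) best = chrom;
        int selected = 0;
        for (boolean b : best) if (b) selected++;
        System.out.println("Best subset selects " + selected + " features, fitness = " + fitness(best));
    }
}
```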

4. Results of the Experimental Investigation

We conducted a series of experiments to compare the overall performance of our approach with some existing algorithms. In this section, we introduce the implementation setting and the evaluation measurements used in our experiments. Then, we describe the relative comparison experiments. Finally, we discuss the results.

4.1. Datasets and Implementation Setting

Our experiments were implemented in the Weka framework with Java, which is available at http://www.cs.waikato.ac.nz/ml/weka/index.html. The datasets come from the public KDD CUP’99 dataset, which exhibits significant class imbalance, has 41 features, and contains a very large number of instances [36]. Because our research targets imbalanced datasets, we preprocess the data by converting symbolic attributes to numeric values and normalizing all values to the range $[0, 1]$.

The raw dataset comprises approximately 4 GB and 5 million instances, divided into labeled and unlabeled parts, with features of three types: continuous, discrete, and string. In this study, we use the ten percent version, which consists of 494,021 connections and 24 types of attacks grouped into 5 classes (Normal, DOS, U2R, R2L, and PROBE). The four main attack types are DOS (Denial of Service, e.g., the land attack), U2R (User to Root, e.g., the rootkit attack), R2L (Remote to Local, e.g., the guess-password attack), and PROBE (probing, in which information about the target is gathered, e.g., the nmap attack). The detailed characteristics of these datasets are shown in Table 3.

Our feature selection method is a filter approach and therefore does not depend on the classifier. We choose two popular types of classifiers for the classification task: k-nearest neighbors and decision trees, implemented as KNN and C4.5, respectively. The latter is also used as a base classifier with Laplace smoothing, since it can be transformed into a cost-sensitive decision tree classifier, as in many related studies.

For the sake of comparison, we designed two groups of experiments (feature selection and classification), one using our cost-sensitive feature selection method and one without it. In the feature selection stage, the average total cost (the sum of the average test and misclassification costs) is used to validate the effectiveness of the proposed method. In the classification stage, weighted accuracy, which substitutes for plain accuracy and is more suitable for imbalanced datasets, is applied. Moreover, we use tenfold cross-validation to evaluate classifier performance and compare the execution time of each classifier with and without feature selection.

The algorithm proposed in this paper focuses on the cost-sensitive fitness function rather than on parameter optimization of the GA, so the parameters of the GA itself are not discussed here. Following the majority of GA configurations, the parameters were set to be nearly the same as those in Weiss et al. [8]: the population size is 20, the number of generations is 20, the crossover probability is 0.6, the mutation probability is 0.033, and the report frequency is 20.
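For readers reproducing this setup in Weka, a rough configuration sketch is given below. It is only an assumption of how the same parameters could be set through Weka's attribute-selection API: it relies on the stock weka.attributeSelection.GeneticSearch and CfsSubsetEval classes as stand-ins (not the cost-sensitive CSFSG evaluator itself), and the dataset path kddcup_10_percent.arff is hypothetical.

```java
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.CfsSubsetEval;
import weka.attributeSelection.GeneticSearch;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Sketch of configuring a genetic attribute-selection search in Weka with the
// parameter values listed above (population 20, generations 20, crossover 0.6,
// mutation 0.033, report frequency 20).
public class GaSearchConfig {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("kddcup_10_percent.arff").getDataSet(); // hypothetical path
        data.setClassIndex(data.numAttributes() - 1);

        GeneticSearch search = new GeneticSearch();
        search.setPopulationSize(20);
        search.setMaxGenerations(20);
        search.setCrossoverProb(0.6);
        search.setMutationProb(0.033);
        search.setReportFrequency(20);

        AttributeSelection selector = new AttributeSelection();
        selector.setEvaluator(new CfsSubsetEval());   // stand-in for a cost-sensitive evaluator
        selector.setSearch(search);
        selector.SelectAttributes(data);
        System.out.println("Selected attribute indices: "
                + java.util.Arrays.toString(selector.selectedAttributes()));
    }
}
```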

Bolón-Canedo et al. studied and evaluated the behavior of cost-based filter methods under the influence of the parameter $\lambda$ [37]. Increasing $\lambda$ gives more weight to the cost relative to the correlation between features; that is, the smaller $\lambda$ is, the higher the total cost and the lower the error. The errors can also be compared using a Kruskal-Wallis statistical test, which helps to choose the value of the parameter [37]. The value of $\lambda$ was investigated from 0 to 0.5 in steps of 0.1, using the total cost for evaluation, and the best-performing value was chosen.

Common classifiers are biased toward the majority classes; however, the classification accuracy of the minority classes matters as much as the overall accuracy. The confusion matrix shown in Table 4 is used to represent the contingency table for imbalance problems [8].

We chose precision, recall, $F$-measure, and ROC area to evaluate the classifiers, defined as follows:
$$\mathrm{precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \qquad \mathrm{recall} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \qquad F_{\beta} = \frac{(1 + \beta^{2}) \cdot \mathrm{precision} \cdot \mathrm{recall}}{\beta^{2} \cdot \mathrm{precision} + \mathrm{recall}},$$
where $\beta$ (usually $\beta = 1$) is a coefficient used to adjust the relative importance of precision versus recall [38, 39]. Among these, recall and $F$-measure are the main criteria aimed at the minority class. The $F$-measure is an effective measurement for imbalance problems because it combines recall and precision: its value is high only when both recall and precision are high [39].
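For clarity, a minimal sketch of these computations from raw confusion-matrix counts (as laid out in Table 4) follows; the counts in the example are illustrative only.

```java
// Sketch of computing precision, recall, and F-measure from confusion-matrix counts
// (TP, FP, FN); beta = 1 gives the standard F1 used for the minority class.
public class ImbalanceMetrics {
    static double precision(int tp, int fp) { return tp / (double) (tp + fp); }
    static double recall(int tp, int fn)    { return tp / (double) (tp + fn); }

    static double fMeasure(double beta, double p, double r) {
        return (1 + beta * beta) * p * r / (beta * beta * p + r);
    }

    public static void main(String[] args) {
        int tp = 40, fp = 10, fn = 20;                 // illustrative counts only
        double p = precision(tp, fp), r = recall(tp, fn);
        System.out.printf("precision=%.3f recall=%.3f F1=%.3f%n", p, r, fMeasure(1.0, p, r));
    }
}
```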

4.2. Experimental Design and Results

For the sake of comparison, we designed two stages of experiments. In the first feature selection stage, we compared the effect of our cost-sensitive feature selection method with traditional ones on an imbalanced dataset. In the second classification stage, we compared the precision of two types of classifiers (KNN, C4.5) using CSFSG via 10-fold cross-validation in the imbalanced environment.

Because the datasets contain classes with deficient numbers of instances and several evaluation metrics are involved, we designed a number of comparison experiments for the proposed feature selection method against two existing feature selection algorithms: correlation-based feature selection (CFS) [40] and CASH [8]. The CFS used here is not cost-sensitive; it is a multivariate filter that yields better results than the wrapper approach on small datasets. CASH, by contrast, is sensitive to both test and misclassification costs and uses histograms; it has been shown to be superior to several other cost-sensitive algorithms [8].

Table 5 shows that the cost-sensitive feature selection methods selected more features but did not take the most time to build the classification models.

As seen in Figures 2 and 3, the cost-sensitive feature selection methods exhibited high performance with regard to the evaluation measures, but our method was superior to CASH, especially with respect to classifying the minority classes (R2L and U2R). CSFSG helped to reduce the number of features needed to distinguish some types of attack and increased the efficiency of the classifiers. The feature selection stage is important and effective because it can save considerable time as the number of instances grows, particularly for large-scale imbalance problems.

In the classification stage, 10-fold cross-validation is applied to the datasets. The results of classification with feature selection are shown in Table 6, where we use $F$-measure, recall, and ROC area to compare the classification models. From the table, we can see that the feature selection methods help the classification models to identify anomalous attacks among the known attacks, despite a few outliers.

Although the results in the first two columns (Normal and DOS) show no obvious differences, the results derived from cost-sensitive feature selection in the last three columns present higher values of $F$-measure, recall, and ROC area. Compared with other cost-sensitive feature selection algorithms, our proposed method performs better, especially on the minority classes.

In addition, we can obtain better classification performance with CSFSG than with CASH in the time given, which is one of the most important factors in network security.

In conclusion, our cost-sensitive feature selection method does not save a considerable amount of time; however, it does facilitate the detection of the minority classes, particularly in a class imbalance environment.

5. Conclusions

In response to the rapid growth of big data, this study presents a novel cost-sensitive feature selection method using a chaotic genetic search for imbalanced datasets. We introduce cost-sensitive learning into the feature selection method, considering both the misclassification cost and test cost with respect to the field of network security.

It can be seen from the experimental results that cost-sensitive feature selection using chaotic genetic search efficiently reduces complexity in the feature selection stage. Meanwhile, it can improve classification accuracy and decrease classification time.

Several future works will address problems with large numbers of features. Furthermore, future research will focus on the application of the proposed method to other fields, such as medicine or biology.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This study is supported by the National Science Foundation for Young Scientists of China (Grant no. 61401300), the Outstanding Graduate Student Innovation Projects of Shanxi Province (no. 20123030), and the Scientific Research Project of Shanxi Provincial Health Department (no. 201301006).