Abstract

Feature selection is an essential process in data mining applications since it reduces a model’s complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem for numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries. It is distinguished from existing models mainly by its error boundaries. Second, a covering-based rough set model with normal distribution measurement errors is constructed. With this model, coverings are constructed from data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model. It is more realistic than existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques of the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of cost-sensitive learning.

1. Introduction

Feature selection [1–3] is an essential process in data mining applications. The main aim of feature selection is to reduce the dimensionality of the feature space and to improve the predictive accuracy of a classification algorithm [4, 5]. In many domains, the misclassification costs [6–9] and the test costs [10, 11] must be considered in the feature selection process. Cost-sensitive feature selection [12–14] focuses on selecting a feature subset with a minimal total cost as well as preserving a particular property of the decision system [15, 16].

Test costs and misclassification costs are the two most important types of cost in cost-sensitive learning [17]. The test cost is the money, time, or other resources we pay for collecting a data item of an object [18, 19]. The misclassification cost is the penalty we receive when deciding that an object belongs to one class while its real class is another [6, 8]. Some works have considered only misclassification costs [20] or only test costs [21–23]. However, in many applications, it is important to consider both types of costs together.

Recently, the cost-sensitive feature selection problem for nominal datasets was proposed [17], and a backtracking algorithm was presented to address it. However, this algorithm has been applied only to small datasets and only to nominal data. In real applications, data are acquired from measurements with different errors, so measurement errors are nearly universal in such data.

In this paper, we propose the cost-sensitive feature selection problem for numerical data with measurement errors and deal with it by considering the trade-off between test costs and misclassification costs. The major contributions of this paper are fourfold. First, based on normal distribution measurement errors, we build a new data model to address test costs and misclassification costs as well as error boundaries. It is distinguished from existing models [17] mainly by the error boundaries. Second, we construct a computational model of the covering-based rough set with normal distribution measurement errors. In fact, the normal distribution [24, 25] is found to be applicable over almost the whole of science and engineering measurement. With this model, coverings are constructed from data rather than assigned by users. Third, the cost-sensitive feature selection problem is defined on this new covering-based rough set model. It is more realistic than existing feature selection problems. Fourth, a backtracking algorithm is proposed to find an optimal feature subset for small datasets. However, for large datasets, finding a minimal cost feature subset is NP-hard. Consequently, we propose a heuristic algorithm to deal with this case.

Six open datasets from the University of California-Irvine (UCI) library are employed to study the performance and effectiveness of our algorithms. Experiments are undertaken with the open source software cost-sensitive rough sets (Coser) [26]. Experimental results show that the pruning techniques of the backtracking algorithm reduce the number of search operations by several orders of magnitude. In addition, the heuristic algorithm finds an optimal feature subset in most cases; even when the selected feature subset is not optimal, it is still acceptable from a statistical point of view.

The rest of the paper is organized as follows. Section 2 presents data models with test costs and misclassification costs as well as measurement errors. Section 3 describes the computational model, namely, covering-based rough set model with measurement errors. The feature selection with the minimal cost problem on the new model is also defined in this section. Then, Section 4 presents a backtracking algorithm and a heuristic algorithm to address this feature selection problem. In Section 5, we discuss the experimental settings and results. Finally, Section 6 concludes and suggests further research trends.

2. Data Models

Data models are presented in this section. First, we start from basic decision systems. Then, we introduce normally distributed errors into tests and propose a decision system with measurement errors. Finally, we introduce a decision system based on measurement errors with test costs and misclassification costs.

2.1. Decision Systems

Decision systems are fundamental in data mining and machine learning. For completeness, a decision system is defined below.

Definition 1 (see [27]). A decision system (DS) is the 5-tuple S = (U, C, d, {V_a}, {I_a}), where U is a universal set of objects, C is a nonempty set of conditional attributes, and d is the decision attribute. For each a ∈ C ∪ {d}, V_a is the value set of attribute a, and I_a: U → V_a is the information function of attribute a.

In order to facilitate processing and comparison, the values of the conditional attributes are normalized into the range from 0 to 1. In fact, there are a number of normalization approaches. For simplicity, we employ the linear function v′ = (v − min)/(max − min), where v is the initial value, v′ is the normalized value, and max and min are the maximal and minimal values of the attribute domain, respectively.
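
As an illustration, here is a minimal Python sketch of this linear normalization (the helper name is ours, not part of the paper):

def normalize(values):
    # Linearly rescale a list of attribute values into [0, 1].
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant attribute: map every value to 0
    return [(v - lo) / (hi - lo) for v in values]

# Example: normalize([85, 92, 103]) returns [0.0, 0.3888..., 1.0].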

Table 1 is a decision system of the Bupa liver disorder data (Liver for short), in which the conditional attribute values are normalized. Here, C = {Mcv, Alkphos, Sgpt, Sgot, Gammagt, Drinks}, d = Selector, and U is the set of patient records.

Liver contains 7 attributes. The first 5 attributes are all blood tests which are thought to be sensitive to liver disorders that might arise from excessive alcohol consumption. The sixth attribute is the number of alcoholic drinks per day. Each line in Liver constitutes the record of a single male individual. The Selector attribute is used to split data into two sets.

2.2. A Decision System with Measurement Errors

In real applications, datasets often contain many continuous (or numerical) attributes. There are a number of measurement methods with different test costs to obtain a numerical data item. Generally, a higher test cost is required to obtain data with a smaller measurement error [28]. Measurement errors often follow a normal distribution, which is found to be applicable over almost the whole of science and engineering measurement. We include normal distribution measurement errors in our model to expand its application scope.

Definition 2 (see [28]). A decision system with measurement errors (MEDS) is the 6-tuple S = (U, C, d, {V_a}, {I_a}, e), where U, C, d, V_a, and I_a have the same meanings as in Definition 1, and e: C → R+ ∪ {0} is the maximal measurement error function; e(a) is the error boundary of attribute a.

Given a ∈ C, the error boundary e(a) of attribute a is computed from the values of a in the data, and a regulator factor can be used to adjust the error boundary.

In applications, one can deal with abnormal measurement values according to the Pauta criterion of measurement error theory, which is used to determine abnormal values. That is, if a repeated measurement value x_i satisfies |x_i − x̄| > 3σ, then x_i is considered an abnormal value and rejected, where σ is the standard deviation and x̄ is the mean of all measurement values.
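
A minimal Python sketch of this abnormal-value rejection, assuming the mean and standard deviation are estimated from the repeated measurements themselves:

import statistics

def pauta_filter(measurements):
    # Reject values farther than three standard deviations from the mean (Pauta criterion).
    mean = statistics.mean(measurements)
    sigma = statistics.pstdev(measurements)
    return [x for x in measurements if abs(x - mean) <= 3 * sigma]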

Recently, the concept of neighborhood (see, e.g., [29, 30]) has been applied to define different types of covering-based rough sets [31–34]. A neighborhood based on a static error range was defined in [35]. Although showing similarities, it is essentially different from ours: our neighborhood takes both the distribution of the data error and the confidence interval into account, and the neighborhood boundaries for different attributes of the same database are completely different. An example of a neighborhood boundary vector is listed in Table 2.

2.3. A Decision System Based on Measurement Errors with Test Costs and Misclassification Costs

In many applications, the test cost must be taken into account [5]. Test cost is the money, time, or other resources that we pay for collecting a data item of an object [8, 9, 18, 19, 36]. In addition to the test costs, it is also necessary to consider misclassification costs. A decision cannot be made if the misclassification costs are unreasonable [5]. More recently, researchers have begun to consider both test costs and misclassification costs [8, 13, 17].

Now, we take into account both test and misclassification costs as well as normal distribution measurement errors. We have defined this decision system in [37] as follows.

Definition 3. A decision system based on measurement errors with test costs and misclassification costs (MEDS-TM) is the 8-tuple S = (U, C, d, {V_a}, {I_a}, e, tc, mc), where U, C, d, V_a, I_a, and e have the same meanings as in Definition 2, tc: C → R+ ∪ {0} is the test cost function, and mc: k × k → R+ ∪ {0} is the misclassification cost function, where k is the number of decision classes.

Here, we consider only the sequence-independent test-cost-sensitive decision system. There are a number of test-cost-sensitive decision systems; a hierarchy of six such models was proposed in [18]. For any B ⊆ C, the test cost function is given by tc(B) = Σ_{a∈B} tc(a).

The test cost function can be stored in a vector. An example of a test cost vector is listed in Table 3.
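
For instance, a sequence-independent test cost is simply the sum of the selected attributes' costs; the sketch below uses the Liver test costs of Table 3 (quoted in Example 4), and the function and variable names are ours:

# Test costs of the Liver attributes (the values quoted in Example 4).
test_cost = {"Mcv": 26, "Alkphos": 17, "Sgpt": 34, "Sgot": 45, "Gammagt": 38, "Drinks": 5}

def tc(feature_subset):
    # Sequence-independent test cost: the sum of the selected attributes' costs.
    return sum(test_cost[a] for a in feature_subset)

# tc({"Mcv", "Drinks"}) == 31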

The misclassification cost [38–40] is the penalty that we receive when deciding that an object belongs to class j while its real class is i [8]. The misclassification cost function is defined as follows: (1) mc is the misclassification cost function, which can be represented by a k × k matrix [mc_{i,j}]; (2) mc_{i,j} is the cost of misclassifying an object of the i-class into the j-class; (3) mc_{i,i} = 0.

The following example gives us an intuitive understanding of the decision system based on measurement errors with test costs and misclassification costs.

Example 4. Table 1 is a Liver decision system. Tables 2 and 3 list the error boundary vector and the test cost vector of the Liver decision system, respectively. Consider the test cost vector tc = [26, 17, 34, 45, 38, 5]; that is, the test costs of Mcv, Alkphos, Sgpt, Sgot, Gammagt, and Drinks are $26, $17, $34, $45, $38, and $5, respectively. In the Liver dataset, the Selector field is used to split the data into two sets. Here, a false negative prediction (FN), that is, failing to detect liver disorders, may well have fatal consequences, whereas a false positive prediction (FP), that is, diagnosing liver disorders for a patient who does not actually have them, may be less serious [41]. Therefore, a higher penalty of $2000 is paid for an FN prediction and $200 for an FP prediction.
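
The misclassification cost matrix of this example can be written down directly; the class ordering below (index 0 for "no disorder", index 1 for "disorder") is our own convention:

# mc[i][j] is the cost of classifying a class-i object as class j; mc[i][i] = 0.
mc = [[0,    200],   # false positive: diagnosing a disorder the patient does not have
      [2000, 0]]     # false negative: failing to detect a disorder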

Obviously, if tc and mc are not considered, the MEDS-TM degrades to a decision system with measurement errors (MEDS) (see, e.g., [28]). Therefore, the MEDS-TM is a generalization of the MEDS.

3. Covering-Based Rough Set with Measurement Errors

As a technique to deal with granularity in information systems, rough set theory was proposed by Pawlak [42]. Since then, we have witnessed a systematic, worldwide growth of interest in rough set theory [43–52] and its applications [53, 54]. Recently, there has been growing interest in covering-based rough sets. In this section, we introduce normal distribution measurement errors to covering-based rough sets. The new model is called the covering-based rough set with measurement errors. Then, we define a new cost-sensitive feature selection problem on this covering-based rough set.

3.1. Covering-Based Rough Set with Measurement Errors

The covering-based rough set with measurement errors is a natural extension of the classical rough set. If all attributes are error free, the covering-based rough set model degenerates to the classical one. With the definition of the MEDS, a new neighborhood is defined as follows.

Definition 5 (see [28]). Let S = (U, C, d, {V_a}, {I_a}, e) be a decision system with measurement errors. Given x ∈ U and B ⊆ C, the neighborhood of x with reference to measurement errors on the feature set B is defined as n_B(x) = {y ∈ U | ∀a ∈ B, |I_a(x) − I_a(y)| ≤ 2e(a)}; that is, the measurement error of attribute a lies in [−e(a), +e(a)], so two measured values of indistinguishable objects can differ by at most 2e(a). According to Definition 5, the neighborhood n_B(x) is the intersection of multiple basic neighborhoods; therefore, we obtain n_B(x) = ∩_{a∈B} n_{{a}}(x).
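
A minimal Python sketch of this neighborhood computation; value(x, a) stands for the information function I_a(x), and radius(a) for the per-attribute bound derived from the error boundary e(a) (both names, and the exact bound, are assumptions of this sketch):

def neighborhood(x, U, B, value, radius):
    # Objects whose values on every attribute in B lie within radius(a) of x's values.
    return {y for y in U
            if all(abs(value(y, a) - value(x, a)) <= radius(a) for a in B)}

# The neighborhood on B is the intersection of the single-attribute neighborhoods:
# neighborhood(x, U, B, value, radius) ==
#     set.intersection(*(neighborhood(x, U, {a}, value, radius) for a in B))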

Although showing similarities, the neighborhood defined in [35] is essentially different from ours in two ways. First, in [35] a fixed neighborhood boundary is used for different datasets; in contrast, the neighborhood boundaries in our model are computed from the attribute values. Second, the uniform distribution is considered in [35]; in contrast, we introduce the normal distribution into our model. As mentioned earlier, the normal distribution is applicable over almost the whole of science and engineering measurement.

Normal distribution is a plausible distribution for measurement errors. In statistics, the “3-sigma” rule states that about 99.73% (95.45%) of measurement data fall within three (two) standard deviations of the mean [55]. We introduce this rule into our model and present a new neighborhood that considers both the error distribution and the confidence interval. The proportion of small measurement errors is higher than that of large ones, and any measured value that deviates from the mean by more than three standard deviations should be discarded. Therefore, measured values whose difference does not exceed a boundary determined by σ should be viewed as one granule. In view of this, we give the relationship between the error boundary and the standard deviation in the following proposition.

Proposition 6. Let the error boundary be e = 2σ and let 95.45% be the confidence level. Then about 95.45% of cases fall within [μ − e, μ + e].

According to Proposition 6, we have about 95.45% of cases within two standard deviations of the mean. If the error boundary is set to e = 3σ, we have about 99.73% of cases within it. According to Definition 5, every object belongs to its own neighborhood. This is formally given by the following theorem.
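
The confidence levels quoted above can be checked against the standard normal distribution; a one-line Python sketch:

import math

def coverage(k):
    # Probability that a normal measurement lies within k standard deviations of its mean.
    return math.erf(k / math.sqrt(2))

# coverage(2) ≈ 0.9545 and coverage(3) ≈ 0.9973, matching the "3-sigma" rule.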

Theorem 7. Let S = (U, C, d, {V_a}, {I_a}, e) be a decision system with measurement errors and B ⊆ C. The set {n_B(x) | x ∈ U} is a covering of U.

Proof. Given B ⊆ C, for all x ∈ U and for all a ∈ B, |I_a(x) − I_a(x)| = 0 ≤ 2e(a).
Therefore, for all x ∈ U, x ∈ n_B(x), and hence n_B(x) ≠ ∅ for any x ∈ U.
Consequently, ∪_{x∈U} n_B(x) = U, and the set {n_B(x) | x ∈ U} is a covering of U. This completes the proof.

Now, we discuss the lower and upper approximations as well as the boundary region of rough set in the new model.

Definition 8 (see [28]). Let S = (U, C, d, {V_a}, {I_a}, e) be a decision system with measurement errors and N_B a neighborhood relation on U, where B ⊆ C. We call (U, N_B) a neighborhood approximation space. For arbitrary X ⊆ U, the lower approximation of X in (U, N_B) is {x ∈ U | n_B(x) ⊆ X}, and the upper approximation of X is {x ∈ U | n_B(x) ∩ X ≠ ∅}. The positive region of d with respect to B, POS_B(d), is defined as the union of the lower approximations of the decision classes in U/d [42, 56].

Definition 9. Let S = (U, C, d, {V_a}, {I_a}, e) be a decision system with measurement errors, X ⊆ U, and B ⊆ C. The boundary region of X in (U, N_B) is defined as the difference between the upper approximation and the lower approximation of X, that is, BN_B(X) = {x ∈ U | n_B(x) ∩ X ≠ ∅} − {x ∈ U | n_B(x) ⊆ X}.
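
A minimal Python sketch of Definitions 8 and 9, assuming nbhd(x) returns the neighborhood n_B(x) as a set:

def lower_approx(X, U, nbhd):
    # Objects whose whole neighborhood is contained in X.
    return {x for x in U if nbhd(x) <= X}

def upper_approx(X, U, nbhd):
    # Objects whose neighborhood intersects X.
    return {x for x in U if nbhd(x) & X}

def boundary_region(X, U, nbhd):
    return upper_approx(X, U, nbhd) - lower_approx(X, U, nbhd)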

Generally, a covering is produced by a neighborhood boundary. The inconsistent object in a neighborhood is defined as follows.

Definition 10 (see [28]). Let S = (U, C, d, {V_a}, {I_a}, e) be a decision system with measurement errors, B ⊆ C, and x ∈ U. An object y ∈ n_B(x) is called an inconsistent object if I_d(y) ≠ I_d(x). The set of inconsistent objects in n_B(x) is io_B(x) = {y ∈ n_B(x) | I_d(y) ≠ I_d(x)}, and the number of inconsistent objects is denoted by |io_B(x)|.
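
A sketch of the inconsistent-object count, with d(x) standing for the decision value I_d(x); this quantity is used later as heuristic information (cf. Definition 13):

def inconsistent_objects(x, nbhd, d):
    # Objects in the neighborhood of x whose decision value differs from that of x.
    return {y for y in nbhd(x) if d(y) != d(x)}

def total_inconsistent(U, nbhd, d):
    # Total number of inconsistent objects over the universe.
    return sum(len(inconsistent_objects(x, nbhd, d)) for x in U)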

Using a specific example, we explain the lower approximations, the upper approximations, the boundary regions, and the inconsistent objects of the neighborhood.

Example 11. A decision system with neighborhood boundaries is given in Tables 4 and 5. Table 4 is a subtable of Table 1. Let a_1 = Mcv, a_2 = Alkphos, and a_3 = Sgpt. The neighborhoods on different feature subsets are listed in Table 6, where the feature subset takes the values listed as column headers and the object takes the values listed in each row. According to Definition 10, the inconsistent objects in each neighborhood can be read from Table 6.
In addition, U is divided into a set of equivalence classes by the decision attribute d. The lower and upper approximations on the different feature subsets are listed in the first part and the second part of Table 7, respectively. Here, the feature subset takes the values listed as column headers, and the decision class takes the values listed in each row.
The positive regions and the boundary regions of d on the different test sets can be computed from Table 7; in particular, one of the feature subsets has the same approximating power as a larger one.

3.2. Minimal Cost Feature Selection Problem

In this work, we focus on cost-sensitive feature selection based on test costs and misclassification costs. Unlike reduction problems, we do not require any particular property of the decision system to be preserved. The objective of feature selection is to minimize the average total cost through a trade-off between test costs and misclassification costs. This cost-sensitive feature selection problem is called the feature selection with minimal average total cost (FSMC) problem.

Problem 1. The FSMC problem.
Input: a MEDS-TM S = (U, C, d, {V_a}, {I_a}, e, tc, mc);
Output: a feature subset B ⊆ C;
Optimization objective: minimize the average total cost (ATC).

The FSMC problem is a generalization of the classical minimal reduction problem. On the one hand, several factors should be considered, such as test costs and misclassification costs as well as normal distribution measurement errors. These factors are all intrinsic to data in real applications. On the other hand, the minimal average total cost is the optimization objective, obtained through the trade-off between the two kinds of costs. Compared with accuracy, the average total cost is a more general metric in data mining applications [36]. The following is a five-step process to compute the average total cost. Let B ⊆ C be a selected feature set. Given any x ∈ U, we compute the neighborhood n_B(x). Let d(x) be the decision value of object x, and let k_0 and k_1 be the numbers of 0-class and 1-class objects in n_B(x), respectively. Let the misclassification costs be mc_{0,1} and mc_{1,0}, where mc_{i,j} is the cost of classifying an object of the i-class into the j-class. In order to minimize the misclassification cost of the set n_B(x), we assign one class to all objects in n_B(x): the assigned class is the 0-class if k_1 · mc_{1,0} ≤ k_0 · mc_{0,1}, and the 1-class otherwise. The decision value finally assigned to an object is the class value it receives most often. Let l(x) denote the class assigned to x; the misclassification cost of object x is mc_{d(x), l(x)}. If d(x) = 0 and l(x) = 1, this cost is mc_{0,1}; conversely, if d(x) = 1 and l(x) = 0, it is mc_{1,0}; it is 0 when d(x) = l(x). Therefore, we compute the average misclassification cost (AMC) as AMC = (1/|U|) Σ_{x∈U} mc_{d(x), l(x)}. The average total cost (ATC) is given by ATC = AMC + Σ_{a∈B} tc(a).
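
The following simplified Python sketch reflects one reading of this five-step process for two classes (0 and 1): each object is predicted from its own neighborhood block, the block is labeled with the class that minimizes its misclassification cost, and the average misclassification cost is added to the test cost of the selected features. The names and the per-object labeling shortcut are ours, not the paper's exact procedure:

def average_total_cost(U, B, nbhd, d, test_cost, mc):
    # nbhd(x, B): neighborhood of x on feature set B; d(x): real class (0 or 1);
    # mc[i][j]: cost of classifying a class-i object as class j, mc[i][i] = 0.
    misclass = 0.0
    for x in U:
        block = nbhd(x, B)
        n1 = sum(1 for y in block if d(y) == 1)
        n0 = len(block) - n1
        # Label the whole block with the class that minimizes its misclassification cost.
        label = 0 if n1 * mc[1][0] <= n0 * mc[0][1] else 1
        misclass += mc[d(x)][label]          # zero when x is labeled correctly
    amc = misclass / len(U)                  # average misclassification cost (AMC)
    return amc + sum(test_cost[a] for a in B)  # ATC = AMC + total test cost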

The main aim of feature selection is to determine a minimal feature subset from a problem domain while retaining a suitably high accuracy in representing the original features [57]. In this context, rather than selecting a minimal feature subset, we choose a feature subset that minimizes the average total cost. The minimal average total cost is given by min_{B ⊆ C} ATC(B).

The following example gives an intuitive understanding.

Example 12. A decision system with neighborhood boundaries is given by Tables 4 and 5. Let B ⊆ C be a selected feature subset, and let the two misclassification costs mc_{0,1} and mc_{1,0} be given.

Step 1. n_B(x_i) is the neighborhood of x_i, which is listed in Table 8. If x_j ∈ n_B(x_i), the value at the ith row and jth column is set to 1; otherwise, it is set to 0.

Step 2. Some neighborhoods contain two kinds of classes, which should be adjusted to one class. For each such neighborhood, in order to minimize its misclassification cost, we adjust the classes of all of its elements to the cost-minimizing class.

Step 3. We can then obtain the new class assignments. For each object, we count the number of times it is assigned to each class, which is listed in Table 9.

Step 4. From Table 9, for each object we select the class with the maximal count as its class value. The original and the newly assigned decision attribute values are listed in Table 10. From this table, we know the numbers of misclassified objects of each class; therefore, the average misclassification cost AMC can be computed.

Step 5. The average total cost is the sum of the average misclassification cost and the test cost of B, that is, ATC = AMC + tc(B).

In order to find a minimal cost feature subset, we define a problem to deal with this issue. In the context of MEDS-TM, this problem is called the cost-sensitive feature selection problem, or the minimal cost feature selection (FSMC) problem. Compared with the minimal test cost reduct (MTR) problem (see, e.g., [15, 16]), the FSMC problem should not only consider the test costs but also take the misclassification costs into account. When the misclassification costs are too large compared with the test costs, the total cost equals the total test cost. In this case, the FSMC problem coincides with the MTR problem.

4. Algorithms

We propose the λ-weighted heuristic algorithm to address the minimal cost feature selection problem. In order to evaluate the performance of a heuristic algorithm, an exhaustive algorithm is also needed. Our exhaustive search is a backtracking algorithm, which examines every possible feature subset in search of an optimal result. In this section, we review our exhaustive algorithm and propose a heuristic algorithm for this new feature selection problem.

4.1. The Backtracking Feature Selection Algorithm

We have proposed an exhaustive algorithm based on backtracking in [37]. The backtracking algorithm can reduce the search space significantly through three pruning techniques. The backtracking feature selection algorithm is illustrated in Algorithm 1. In order to invoke it, several global variables should be explicitly initialized as follows: (1) R = ∅, a feature subset with minimal average total cost; (2) cmc = ATC(∅), the currently minimal average total cost; (3) the initial call backtracking(∅, 0).

Input: S = (U, C, d, {V_a}, {I_a}, e, tc, mc), selected tests B, current level test index lower bound il
Output: a set of features R with the minimal ATC and the cost cmc; they are global variables
Method: backtracking
for (i = il; i < |C|; i++) do
  B′ = B ∪ {a_i};
  // Pruning for too expensive test cost
  if tc(B′) ≥ cmc then
    continue;
  end if
  // Pruning for non-decreasing total cost and decreasing misclassification cost
  if (ATC(B′) ≥ ATC(B)) and (AMC(B′) < AMC(B)) then
    continue;
  end if
  if ATC(B′) < cmc then
    cmc = ATC(B′); // Update the minimal total cost
    R = B′; // Update the set of features with minimal total cost
  end if
  backtracking(B′, i + 1);
end for

A feature subset with the minimal ATC will be stored in R at the end of the algorithm execution. Generally, the search space of the feature selection problem is 2^|C|. In order to deal with this issue, a number of algorithms have been employed in real applications, such as particle swarm optimization algorithms [58], genetic algorithms [1], and backtracking algorithms [59].

In Algorithm 1, three pruning techniques are employed to reduce the search space of feature selection. First, the loop variable i starts from the lower bound il instead of 0; whenever we move forward through the recursive procedure, the lower bound increases. Second, because misclassification costs are nonnegative in real applications, a candidate feature subset B′ is discarded whenever its test cost alone already reaches the current minimal average total cost (cmc); this technique prunes most branches. Finally, the last pruning condition indicates that if the new feature subset produces a non-decreasing total cost along with a decreasing misclassification cost, the current branch will never produce the feature subset with the minimal total cost.
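
A compact Python sketch of this backtracking search with both pruning steps; tc, atc, and amc are assumed to compute the test cost, average total cost, and average misclassification cost of a feature subset, and C is the list of conditional attributes:

def backtrack(B, lower, C, tc, atc, amc, best):
    # best = {"R": <feature subset>, "cmc": <currently minimal average total cost>}
    for i in range(lower, len(C)):
        Bp = B | {C[i]}
        if tc(Bp) >= best["cmc"]:
            continue                      # pruning: test cost alone already too high
        if atc(Bp) >= atc(B) and amc(Bp) < amc(B):
            continue                      # pruning: extra tests no longer pay off
        if atc(Bp) < best["cmc"]:
            best["cmc"], best["R"] = atc(Bp), Bp
        backtrack(Bp, i + 1, C, tc, atc, amc, best)

# Initial call, mirroring the global-variable initialization above:
# best = {"R": set(), "cmc": atc(set())}; backtrack(set(), 0, C, tc, atc, amc, best)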

4.2. The λ-Weighted Heuristic Feature Selection Algorithm

In order to deal with the minimal cost feature selection problem, we design the λ-weighted heuristic feature selection algorithm. The algorithm framework is listed in Algorithm 2 and contains two main steps. First, the algorithm adds the current best feature to B according to the heuristic function until B becomes a superreduct. Then, it deletes features from B while guaranteeing the current minimal total cost. In Algorithm 2, computing the heuristic function and selecting the attribute with the maximal value are the key steps of the addition; the second while loop shows the steps of the deletion.

Input: S = (U, C, d, {V_a}, {I_a}, e, tc, mc), λ
Output: a feature subset B with minimal total cost
Method:
B = ∅;
// Addition
CA = C;
while io(B) > 0 do
  for each a ∈ CA do
    compute f(B, a, tc(a));
  end for
  select a′ with the maximal f(B, a′, tc(a′));
  B = B ∪ {a′}; CA = CA − {a′};
end while
// Deletion
while there exists a ∈ B with ATC(B − {a}) ≤ ATC(B) do
  for each a ∈ B do
    compute ATC(B − {a});
  end for
  select a′ with the minimal ATC(B − {a′});
  B = B − {a′};
end while
return B;

According to Definition 10, the number of inconsistent objects in neighborhood is useful in evaluating the quality of a neighborhood block. Now, we introduce the following concepts.

Definition 13 (see [35]). Let S = (U, C, d, {V_a}, {I_a}, e) be a decision system with measurement errors, B ⊆ C, and x ∈ U. The total number of inconsistent objects with respect to B is io(B) = Σ_{x∈U} |io_B(x)|, and the positive region is POS_B(d) = {x ∈ U | io_B(x) = ∅}.

According to Definition 13, we know that B is a superreduct if and only if io(B) = 0. Now, we propose the λ-weighted heuristic information function f(B, a_i, tc(a_i)), where tc(a_i) is the test cost of the attribute a_i and λ is a user-specified parameter. In this heuristic information function, the attributes with lower cost have bigger significance. We can adjust the significance of the test cost through different λ settings. If λ = 0, test costs are essentially not considered.
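
A Python sketch of the addition-deletion framework of Algorithm 2. The significance formula used below, (io(B) − io(B ∪ {a})) · tc(a)^λ, is only our assumed instantiation of the λ-weighted heuristic function; io(B) gives the total number of inconsistent objects of a feature subset (Definition 13) and atc(B) its average total cost:

def heuristic_selection(C, tc, io, atc, lam=-1.0):
    # io(B): total number of inconsistent objects of feature subset B (Definition 13);
    # atc(B): average total cost of B; tc[a]: test cost of attribute a;
    # lam: user-specified exponent; a non-positive lam favours cheap attributes (our assumption).
    B = set()
    # Addition: greedily add the attribute with the largest weighted significance
    # until B is a superreduct (no inconsistent objects) or no attribute is left.
    while io(B) > 0 and B != C:
        a_best = max(C - B, key=lambda a: (io(B) - io(B | {a})) * tc[a] ** lam)
        B |= {a_best}
    # Deletion: repeatedly drop the attribute whose removal yields the lowest ATC,
    # as long as doing so does not increase the current ATC.
    while B:
        a_worst = min(B, key=lambda a: atc(B - {a}))
        if atc(B - {a_worst}) > atc(B):
            break
        B -= {a_worst}
    return B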

5. Experiments

In this section, we try to answer the following questions by experimentation. The first two questions concern the backtracking algorithm, and the others concern the heuristic algorithm.
(1) Is the backtracking algorithm efficient?
(2) Is the heuristic algorithm appropriate for the minimal cost feature selection problem?
(3) How does the minimal total cost change for different misclassification cost settings?

5.1. Data Generation

Experiments are carried out on six standard datasets obtained from the UCI repository: Liver, Wdbc, Wpbc, Diab, Iono, and Credit. The first four datasets are from medical applications, where Wpbc and Wdbc are the Wisconsin breast cancer prognosis and diagnosis datasets, respectively, and Liver and Diab are the liver disorder and diabetes datasets, respectively. Iono stands for Ionosphere, which is from a physics application. The Credit dataset is from a commerce application.

Table 11 shows a brief description of each dataset. Most datasets from the UCI library [60] have no intrinsic measurement errors, test costs, or misclassification costs. In order to study the performance of the feature selection algorithms, we create these data for the experiments.

Step 1. Each dataset should contain exactly one decision attribute and have no missing value. To make the data easier to handle, data items are normalized from their value into a range from 0 to 1.

Step 2. We produce the error boundary e(a) for each original test according to (3). The error boundary is computed from the values in the database without any subjectivity.
Three kinds of neighborhood boundaries of the different databases are shown in Table 12: the maximal, the minimal, and the average neighborhood boundaries over all attributes. The precision of e(a) can be adjusted through the regulator factor r, and we set r to be 0.01 in our experiments.

Step 3. We produce test costs, which are always represented by positive integers. For any a ∈ C, tc(a) is set to a random number in [12, 55] subject to the uniform distribution.

Step 4. The misclassification costs are always represented by nonnegative integers. We produce the matrix of misclassification costs as follows: (1) mc_{i,i} = 0; (2) the two off-diagonal entries mc_{0,1} and mc_{1,0} are set to random numbers in the specified ranges, respectively.
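
A sketch of Steps 3 and 4, assuming uniformly random integer test costs in [12, 55] and an illustrative (not the paper's) range for the two off-diagonal misclassification costs:

import random

def generate_costs(attributes, mc_range=(100, 1000)):
    # Step 3: a random integer test cost in [12, 55] for every attribute.
    test_cost = {a: random.randint(12, 55) for a in attributes}
    # Step 4: zero diagonal; off-diagonal misclassification costs drawn from mc_range.
    mc = [[0, random.randint(*mc_range)],
          [random.randint(*mc_range), 0]]
    return test_cost, mc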

5.2. Efficiencies of the Two Algorithms

First, we study the efficiency of the backtracking algorithm. Specifically, experiments are undertaken with 100 different test cost settings. The search space and the number of steps for the backtracking algorithm are listed in Table 13. From the results, we note that the pruning techniques significantly reduce the search space. Therefore, the pruning techniques are very effective.

Second, from Table 13, we note that the number of steps does not simply rely on the size of the dataset. Wpbc is much larger than Credit; however, the number of steps is smaller. For some medium sized datasets, the backtracking algorithm is an effective method to obtain the optimal feature subset.

Third, we compare the efficiency of the heuristic algorithm and the backtracking algorithm. Specifically, experiments are undertaken with 100 different test cost settings on the six datasets listed in Table 11. For the heuristic algorithm, λ is set to 1. The average and maximal run times of both algorithms are shown in Figure 1, where run time is measured in milliseconds. From the results, we note that the heuristic algorithm is more stable in terms of run time.

In a word, when run time is not a concern, the backtracking algorithm is an effective method for many datasets. In real applications, when the run time of the backtracking algorithm is unacceptable, the heuristic algorithm must be employed.

5.3. Effectiveness of the Heuristic Algorithm

The precision of the error boundary can be adjusted through the regulator factor r; we let r be 0.01 on all datasets except Wdbc and Wpbc. This setting produces overly small neighborhoods for the Wdbc and Wpbc datasets; hence, we use a different value of r for these two datasets. As mentioned earlier, the parameter λ plays an important role. The data of our experiments come from real applications, and the errors are not given by the datasets. In this paper, we consider only some possible error ranges.

The algorithm runs 100 times with different test cost settings and different λ settings on all datasets. Figure 2 shows the results of finding optimal factors. From the results, we know that the test cost plays a key role in this heuristic algorithm. As shown in Figure 2, the performance of the algorithm differs markedly for different settings of λ. Data for some settings are not included in the experimental results because the respective results are incomparable to the others. Figure 3 shows the average exceeding factors. These display the overall performance of the algorithm from a statistical perspective.

From the results, we observe the following:
(1) The quality of the results depends on the dataset, because the error range and the heuristic information are both computed from the values of the dataset.
(2) The finding-optimal-factor results are acceptable on most datasets except Wdbc. Better results can be obtained with a smaller setting; however, the number of selected features will then also be smaller.
(3) The average exceeding factor is less than 0.08 in most cases. In other words, the results are acceptable.

5.4. The Results for Different Cost Settings

In this section, we study how the minimal total cost changes for different misclassification cost settings. Table 14 lists the optimal feature subsets based on different misclassification costs for the Wdbc dataset. The ratio of the two misclassification costs is set to 10 in this experiment.

As shown in this table, when the misclassification costs are low, the algorithm avoids undertaking expensive tests.

When the misclassification cost is too large compared with the test cost, the FSMC problem coincides with the MTR problem. Therefore, FSMC problem is a generalization of MTR problem.

In the last row of Table 14, the test cost of the subset [24, 31, 45, 55] equals the total cost; therefore, the misclassification cost is 0, and this feature subset is a reduct.

The changes of the test costs versus the average minimal total cost are also shown in Figure 4. In the real world, we would not select expensive tests when misclassification costs are low, and Figure 4 shows this situation clearly. From the results, we observe the following.
(1) As shown in Figures 4(a), 4(b), 4(e), and 4(f), when the test costs remain unchanged, the total costs increase linearly along with the increasing misclassification costs.
(2) If the misclassification costs are small enough, we may give up the tests. Figure 4(d) shows that when the misclassification costs are $30 and $300, the test cost is zero, and the total cost is the most expensive.
(3) As shown in Figures 4(a) and 4(c), the total costs increase along with the increasing misclassification costs. The total costs remain the same once they equal the test costs.

6. Conclusions

In this paper, we built a new covering-based rough set model with normal distribution measurement errors. A new cost-sensitive feature selection problem is defined based on this model. This new problem has a wide application area for two reasons. One is that the resources one can afford are often limited. The other is that data with measurement errors, as considered here, are ubiquitous. A backtracking algorithm and a heuristic algorithm are designed. Experimental results indicate the efficiency of the backtracking algorithm and the effectiveness of the heuristic algorithm.

With regard to future research, much work needs to be undertaken. First, other realistic data models with neighborhood boundaries can be built. Second, the current implementation of the algorithm deals only with binary-class problems, which is its principal limitation; in the future, an extended algorithm needs to be proposed to cope with multiclass problems. A third point to be considered in future research is that one can borrow ideas from [61–63] to design other exhaustive and heuristic algorithms. In summary, this study suggests new research trends concerning covering-based rough set theory, the feature selection problem, and cost-sensitive learning.

Acknowledgments

This work is in part supported by the National Science Foundation of China under Grant no. 61170128, the Natural Science Foundation of Fujian Province, China, under Grant no. 2012J01294, the State Key Laboratory of Management and Control for Complex Systems Open Project under Grant no. 20110106, and the Fujian Province Foundation of Higher Education under Grant no. JK2012028.