Scientific Programming
Volume 2016, Article ID 8035089, 9 pages
http://dx.doi.org/10.1155/2016/8035089
Research Article

A Cost-Sensitive Sparse Representation Based Classification for Class-Imbalance Problem

1School of Computer and Information Security, Guilin University of Electronic Technology, Guilin 541004, China
2School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin 541004, China
3School of Automation, Beijing University of Posts and Telecommunications, Beijing 100876, China

Received 8 August 2016; Accepted 16 October 2016

Academic Editor: Kun Hua

Copyright © 2016 Zhenbing Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Sparse representation has been successfully used in pattern recognition and machine learning. However, most existing sparse representation based classification (SRC) methods aim to achieve the highest classification accuracy, assuming the same losses for different misclassifications. This assumption, however, may not hold in many practical applications, as different types of misclassification can lead to different losses. Moreover, in real-world applications, many data sets have imbalanced class distributions. To address these problems, we propose a cost-sensitive sparse representation based classification (CSSRC) method for the class-imbalance problem using probabilistic modeling. Unlike traditional SRC methods, we predict the class label of test samples by minimizing the misclassification losses, which are obtained via computing the posterior probabilities. Experimental results on the UCI databases validate the efficacy of the proposed approach in terms of average misclassification cost, positive-class misclassification rate, and negative-class misclassification rate. In addition, we sampled test samples and training samples with different imbalance ratios and used the F-measure, G-mean, classification accuracy, and running time to evaluate the performance of the proposed method. The experiments show that our proposed method performs competitively compared to SRC, CSSVM, and CS4VM.

1. Introduction

As a powerful tool for statistical signal modeling, sparse representation (or sparse coding) has been successfully used in pattern recognition fields [1], such as texture classification [2] and face recognition [3, 4], in the past few years. In [3], Wright et al. proposed a sparse representation based classification (SRC) method for face recognition under various illuminations and occlusions, which represents an input test image as a sparse linear combination of training images and assigns the test image to the class whose training samples best reconstruct it. In their work, they used an $\ell_1$-regularizer rather than an $\ell_0$-regularizer to regularize the objective function and then calculated the residuals between the original test sample and the reconstructed one to identify the query image's label. Such a sparse representation based classification framework has achieved great success in face recognition and has boosted research on sparsity-related machine learning methods.

Traditional classification algorithms [5], including SRC, are designed to achieve the lowest recognition error and assume the same losses for different types of misclassifications. However, this assumption may not be suitable for many real-world applications. For example, it may merely cause inconvenience if a gallery subject is misclassified as an impostor and not allowed to enter a room controlled by a face recognition system, but it may result in a serious loss if an impostor is misclassified as a gallery subject and allowed to enter the room. In such settings, the loss of misclassification should be taken into consideration, and "cost" information can be introduced to measure the severity of misclassification. In recent years, many cost-sensitive methods have been proposed. Typical works include the Cost-Sensitive Semisupervised Support Vector Machine (CS4VM) and Cost-Sensitive Laplacian Support Vector Machine (CSLSVM) proposed by Zhou et al. [6, 7], a cost-sensitive Naïve Bayes method from a novel perspective of inferring the order relation [8] proposed by Fang et al., and a novel cost-sensitive approach proposed by Castro and Braga to improve the performance of the multilayer perceptron [9]. In [10], an instance weighting method was incorporated into various Bayesian network classifiers; the instance weighting modified the classifiers' probability estimation, making them cost-sensitive. In [11], Lo et al. presented a basis expansion model for multilabel classification to handle the cost-sensitive multilabel classification problem, where each basis function is an LP classifier trained on a random k-label set. In [12], Wan et al. proposed a cost-sensitive feature selection method called Discriminative Cost-Sensitive Laplacian Score (DCSLS) for face recognition, which incorporates the idea of local discriminant analysis into the Laplacian Score.

Cost-sensitive learning often coexists with class-imbalance in applications whose goal is to minimize the total misclassification cost [13]. Class-imbalance is considered one of the most challenging problems in machine learning and data mining. The imbalance ratio (the size of the majority class relative to the minority class) can be as large as 100, or even 10000. Much work has been done to address the class-imbalance problem, and cost-sensitive learning is an effective approach to imbalanced data classification. In recent years, cost-sensitive learning has been studied widely and has become one of the most important topics for solving the class-imbalance problem. In [14], Zhou and Liu empirically studied the effect of sampling and threshold-moving in training cost-sensitive neural networks and revealed that threshold-moving and soft ensembles are relatively good choices. There are also cost-sensitive learning methods that improve existing methods: in [15], Sun et al. proposed cost-sensitive boosting algorithms, developed by introducing cost items into the learning framework of AdaBoost. Another strategy for the class-imbalance problem is based on changing the distribution of the data sets. In [16], Jiang et al. proposed a novel Minority Cloning Technique (MCT) for class-imbalanced cost-sensitive learning; MCT alters the class distribution of the training data by cloning each minority-class instance according to the similarity between it and the mode of the minority class. Generally, users focus more on the minority class and consider the cost of misclassifying a minority-class instance to be more expensive. In our study, we adopt the same strategy to address this problem.

In [17], a probabilistic cost-sensitive classifier was proposed for face recognition; the authors utilize a probabilistic model to estimate the posterior probability of a test sample and calculate all the misclassification losses via the posterior probabilities. Motivated by this probabilistic model and by probabilistic subspace methods [17–19], we propose a new method to handle misclassification cost. In sparse representation, a coefficient with a larger value plays a more important role in reconstruction [20]; in the extreme case, the coefficient is 1 when a query sample is represented by a dictionary containing that same sample. Just as, in a Gaussian distribution, a sample close to the mean vector has a higher probability, we use the coefficient vector to calculate the posterior probabilities, rather than the distribution of the noise (residual) as in [17], which requires estimating the noise distribution. The main advantage of our method is its reduced computational complexity and cost, and its main contribution is obtaining the posterior probability from the coefficient vector of the sparse representation. After calculating all the misclassification losses via the posterior probabilities, the test sample is assigned to the class whose loss is minimal. Experimental results on UCI databases validate the effectiveness and efficiency of our method.

This paper is organized as follows. Section 2 outlines the details of the relevant method. Section 3 presents the details of the proposed algorithm. Section 4 reports the experiments. Finally, Section 5 concludes the paper and offers suggestions for future research.

2. Related Works

In this section, we briefly introduce some related works, including sparse representation based classification and cost-sensitive learning framework.

2.1. Sparse Representation Based Classification

Sparse representation is a typical method in machine learning [3, 21, 22], which uses labeled training samples from distinct object classes to learn a dictionary and determine the label of an unseen test sample correctly. We denote the data set of $n_i$ training samples from the $i$th class as a matrix $A_i = [v_{i,1}, v_{i,2}, \ldots, v_{i,n_i}] \in \mathbb{R}^{m \times n_i}$, where $n = \sum_{i=1}^{k} n_i$ is the number of all training samples and $k$ is the number of classes in the training set. Given sufficient training samples of the $i$th class, any test sample $y \in \mathbb{R}^m$ from the same class can be approximately represented as a linear combination of the training samples of class $i$:

$$y = \alpha_{i,1} v_{i,1} + \alpha_{i,2} v_{i,2} + \cdots + \alpha_{i,n_i} v_{i,n_i}. \tag{1}$$

Then, we rewrite the above representation of $y$ in matrix form as $y = A_i \alpha_i$, where $\alpha_i = [\alpha_{i,1}, \ldots, \alpha_{i,n_i}]^T$. Then, define a new matrix $A$ for the entire training set as follows:

$$A = [A_1, A_2, \ldots, A_k] \in \mathbb{R}^{m \times n}. \tag{2}$$

Many distance-based methods are not robust in real-world applications because of various occlusions. To overcome this limitation, Wright et al. introduced the sparse representation based classification method to represent the query image. The linear representation of $y$ can then be rewritten in terms of all training samples as

$$y = A x_0, \tag{3}$$

where $x_0 = [0, \ldots, 0, \alpha_{i,1}, \ldots, \alpha_{i,n_i}, 0, \ldots, 0]^T \in \mathbb{R}^n$ is a coefficient vector whose entries are zero except those associated with the $i$th class. This motivates us to seek the sparsest solution by solving the following optimization problem:

$$\hat{x}_0 = \arg\min_x \|x\|_0 \quad \text{subject to} \quad Ax = y, \tag{4}$$

where $\|\cdot\|_0$ denotes the $\ell_0$-norm, which counts the number of nonzero entries in a vector. However, the above problem of finding the sparsest solution (the $\ell_0$-norm minimization problem) is nonconvex and actually NP-hard. Generally, if the solution sought is sparse enough, the solution of the $\ell_0$-minimization problem is equal to the solution of the following $\ell_1$-minimization problem [4, 22, 23]:

$$\hat{x}_1 = \arg\min_x \|x\|_1 \quad \text{subject to} \quad Ax = y. \tag{5}$$

Real data are noisy, so the model may not represent the test sample exactly. To deal with the noise, Wright et al. extended the $\ell_1$-norm minimization problem to the following formulation:

$$y = Ax + z, \tag{6}$$

where $z \in \mathbb{R}^m$ is a noise term with bounded energy $\|z\|_2 \leq \varepsilon$. The sparse solution can still be obtained by solving the following stable $\ell_1$-minimization problem:

$$\hat{x}_1 = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \leq \varepsilon. \tag{7}$$

To better harness such linear structure, classification is instead based on how well the coefficients associated with the training samples of each class reproduce $y$. Let $\hat{x}_1$ be the solution of (7); for each class $i$, let $\delta_i : \mathbb{R}^n \to \mathbb{R}^n$ be the characteristic function that selects the coefficients associated with the $i$th class. Using only these coefficients, one can approximate the given test sample as $\hat{y}_i = A\,\delta_i(\hat{x}_1)$. The residual (Euclidean distance) between $y$ and $\hat{y}_i$ is then

$$r_i(y) = \|y - A\,\delta_i(\hat{x}_1)\|_2. \tag{8}$$

The label of the test sample is identified by minimizing the residual:

$$\text{identity}(y) = \arg\min_i r_i(y). \tag{9}$$
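The SRC procedure above (sparse coding followed by per-class residuals) can be sketched in a few lines of Python. As a hedged stand-in for the constrained stable $\ell_1$ problem, this sketch uses ISTA on the $\ell_1$-regularized least-squares surrogate; the regularization weight `lam` and iteration count are illustrative assumptions, not values from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(A, y, lam=0.01, n_iter=500):
    """ISTA for min_x 0.5*||Ax - y||_2^2 + lam*||x||_1 (a surrogate of the
    constrained stable l1 problem)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

def src_classify(A, labels, y):
    """Assign y to the class whose training columns best reconstruct it."""
    x = sparse_code(A, y)
    residuals = {}
    for c in np.unique(labels):
        delta = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ delta)
    return min(residuals, key=residuals.get)
```

Any other $\ell_1$ solver (homotopy, ADMM, an LP formulation) could be substituted for `sparse_code` without changing the classification rule.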

2.2. Cost-Sensitive Function

In multiclass cost-sensitive learning, consider $G$ gallery subjects with class labels $\{1, 2, \ldots, G\}$ and many impostors, whose label is denoted by $G+1$. In [7], Zhang and Zhou categorized the costs into three types: the cost of false acceptance $C_{FA}$ (an impostor misclassified as a gallery subject), the cost of false rejection $C_{FR}$ (a gallery subject misclassified as an impostor), and the cost of false identification $C_{FI}$ (a gallery subject misclassified as another gallery subject). Empirically, it is evident that $C_{FA}$, $C_{FR}$, and $C_{FI}$ are unequal. Users may give a cost setting according to their needs and reassign $C_{FA}$, $C_{FR}$, and $C_{FI}$; here, for ease of understanding, we preserve the original formulation. We can construct a multiclass cost matrix

$$C = (c_{ij})_{(G+1) \times (G+1)}, \tag{10}$$

where $c_{ij}$ indicates the cost of misclassifying a sample of the $i$th class as the $j$th class. The diagonal elements of $C$ are all zero since there is no loss for correct recognition.
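The multiclass cost matrix described above can be assembled mechanically once the three cost types are fixed. The sketch below is illustrative only: the function name and the particular values of `c_fa`, `c_fr`, and `c_fi` are assumptions, not settings taken from the paper.

```python
import numpy as np

def build_cost_matrix(n_gallery, c_fa=10.0, c_fr=5.0, c_fi=1.0):
    """Cost matrix for n_gallery gallery classes plus one impostor class.

    Entry C[i, j] is the cost of predicting class j for a sample whose true
    class is i. The values c_fa, c_fr, c_fi are illustrative assumptions.
    """
    n = n_gallery + 1                      # last index = impostor class
    C = np.full((n, n), c_fi)              # gallery -> wrong gallery: false identification
    C[-1, :n_gallery] = c_fa               # impostor -> gallery: false acceptance
    C[:n_gallery, -1] = c_fr               # gallery -> impostor: false rejection
    np.fill_diagonal(C, 0.0)               # correct prediction costs nothing
    return C
```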

Cost-sensitive learning usually takes the misclassification cost as the objective function and identifies the label by minimizing a loss function. Given a test sample $y$, its predicted class label $c^*$ is obtained by minimizing the objective function

$$c^* = \arg\min_{c \in \{1, \ldots, G+1\}} \text{loss}(y, c), \tag{11}$$

where

$$\text{loss}(y, c) = \sum_{j=1}^{G+1} c_{jc}\, P(j \mid y), \tag{12}$$

where $c^*$ is the optimal prediction of $y$ and $\{1, \ldots, G\}$ represents the gallery subjects in the classification problem.
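The expected-loss decision rule above is a one-liner once the posteriors and the cost matrix are available. This is a minimal sketch of the generic Bayes-risk rule, not of any particular classifier in the paper:

```python
import numpy as np

def min_risk_label(posteriors, C):
    """Pick the label minimizing expected misclassification cost.

    posteriors[i] = P(true class = i | y); C[i, j] = cost of predicting j
    when the truth is i. The expected loss of predicting j is
    sum_i posteriors[i] * C[i, j].
    """
    losses = posteriors @ C       # vector of expected losses, one per candidate label
    return int(np.argmin(losses))
```

With a skewed cost matrix, the rule can prefer a class whose posterior is not the largest, which is exactly the cost-sensitive behavior the section describes.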

3. Cost-Sensitive SRC

In [5], Alpaydın calculated the residuals to identify the class label of a test sample $y$, that is, the Euclidean distance between the reconstructed sample and the original test sample $y$. In cost-sensitive learning, the loss function (see (12)) is regarded as the objective function for identifying the label of a test sample. In a binary classification problem, there are two misclassification costs: we denote the cost of misclassifying the positive class as negative by $C_{FN}$ and the converse cost by $C_{FP}$. Then a cost matrix can be constructed as

$$C = \begin{pmatrix} 0 & C_{FN} \\ C_{FP} & 0 \end{pmatrix}, \tag{13}$$

where $+1$ and $-1$ represent the labels of the minority class and majority class, respectively.

It is well known that the loss function can be related to the posterior probability $P(c \mid y)$. The loss function can then be rewritten as follows:

$$\text{loss}(y, +1) = C_{FP}\, P(-1 \mid y), \qquad \text{loss}(y, -1) = C_{FN}\, P(+1 \mid y). \tag{14}$$

The test sample belongs to the class with the lower expected loss. Now, we estimate $P(+1 \mid y)$ and $P(-1 \mid y)$.

In the coefficient vector, the larger an element's value is, the more important its role in reconstructing the test sample. In other words, the ideal case is that the test sample is represented only by training samples carrying the same class label, with no samples from the other class entering the linear combination. The posterior probability can therefore be related to the coefficient vector. According, we rewrite the solution of (7) as $\hat{x}_1 = [\hat{x}_{+}; \hat{x}_{-}]$, where $\hat{x}_{+} \in \mathbb{R}^{n_+}$ and $\hat{x}_{-} \in \mathbb{R}^{n_-}$ represent the positive-class and negative-class coefficients, respectively. Here, $n_+$ is the number of positive samples and $n_-$ is the number of negative samples in the dictionary. Then, we can obtain the posterior probabilities

$$P(+1 \mid y) = \frac{\|\hat{x}_{+}\|_1}{S}, \qquad P(-1 \mid y) = \frac{\|\hat{x}_{-}\|_1}{S}, \tag{15}$$

where $S = \|\hat{x}_{+}\|_1 + \|\hat{x}_{-}\|_1$. Then, (14) can be written as

$$\text{loss}(y, +1) = C_{FP}\, \frac{\|\hat{x}_{-}\|_1}{S}, \qquad \text{loss}(y, -1) = C_{FN}\, \frac{\|\hat{x}_{+}\|_1}{S}. \tag{16}$$

We can obtain the label of a test sample by minimizing (16):

$$\text{identity}(y) = \arg\min_{c \in \{+1, -1\}} \text{loss}(y, c). \tag{17}$$

The whole process of CSSRC is described in Algorithm 1.

Algorithm 1 (CSSRC algorithm).
Input. Dictionary $A$, test sample $y$
Output. The label of test sample $y$
(1) Normalize the columns of $A$ to unit $\ell_2$-norm.
(2) Solve the $\ell_1$-minimization problem
$$\hat{x}_1 = \arg\min_x \|x\|_1 \quad \text{subject to} \quad Ax = y,$$
or alternatively, solve
$$\hat{x}_1 = \arg\min_x \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \leq \varepsilon.$$
Assume the solution is $\hat{x}_1 = [\hat{x}_{+}; \hat{x}_{-}]$.
(3) Calculate the loss function
$$\text{loss}(y, +1) = C_{FP}\, \frac{\|\hat{x}_{-}\|_1}{S}, \qquad \text{loss}(y, -1) = C_{FN}\, \frac{\|\hat{x}_{+}\|_1}{S},$$
where $S = \|\hat{x}_{+}\|_1 + \|\hat{x}_{-}\|_1$.
(4) Obtain the label of $y$:
$$\text{identity}(y) = \arg\min_{c \in \{+1, -1\}} \text{loss}(y, c).$$
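A minimal end-to-end sketch of Algorithm 1 follows. It is a hedged illustration, not the authors' implementation: the $\ell_1$ step is solved with ISTA on a regularized surrogate, the posteriors are taken proportional to the $\ell_1$ mass of each class's coefficients (one plausible reading of the posterior estimate), and the cost values `c_fn`, `c_fp` are arbitrary examples:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    # ISTA for the l1-regularized least-squares surrogate of step (2).
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x

def cssrc(A, labels, y, c_fn=10.0, c_fp=1.0):
    """Cost-sensitive SRC for two classes (+1 minority, -1 majority)."""
    A = A / np.linalg.norm(A, axis=0)          # unit l2-norm columns (step 1)
    x = ista(A, y)                             # sparse coefficients (step 2)
    mass_pos = np.abs(x[labels == 1]).sum()
    mass_neg = np.abs(x[labels == -1]).sum()
    total = mass_pos + mass_neg
    if total == 0:                             # degenerate: no support found
        return -1
    p_pos, p_neg = mass_pos / total, mass_neg / total
    loss_pos = c_fp * p_neg                    # predicting +1 costs C_FP if truth is -1
    loss_neg = c_fn * p_pos                    # predicting -1 costs C_FN if truth is +1
    return 1 if loss_pos < loss_neg else -1    # step (4)
```

Because $C_{FN} \gg C_{FP}$, the rule predicts the minority class even when only a modest fraction of the coefficient mass falls on minority-class atoms.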

4. Experiments

4.1. Data Sets and Experimental Setting

We test the proposed method on seven UCI data sets [24]. Detailed information about these data sets is summarized in Table 1.

Table 1: Description of data sets.

In cost-sensitive learning, false positives (actual negative but predicted as positive, denoted FP), false negatives (actual positive but predicted as negative, FN), true positives (actual positive and predicted as positive, TP), and true negatives (actual negative and predicted as negative, TN) can be arranged in a confusion matrix as follows:

                      Predicted positive    Predicted negative
  Actual positive            TP                    FN
  Actual negative            FP                    TN

For binary classification problems, four kinds of cost are needed, referred to as $C_{TP}$, $C_{FP}$, $C_{TN}$, and $C_{FN}$, respectively. $C_{TP}$ and $C_{TN}$ are the costs of true positives (TP) and true negatives (TN); in order to simplify the cost matrix, we set $C_{TP} = 0$ and $C_{TN} = 0$. $C_{FN}$ and $C_{FP}$ are the costs of false negatives (FN) and false positives (FP). We always assume that the cost of misclassifying positive-class instances is much higher than that of misclassifying negative-class instances, so we set $C_{FN} \gg C_{FP}$. In this paper, $C_{FP}$ is set to a unit cost of 1, and $C_{FN}$ is assigned several larger values. In our experiments, we adopt 10-fold cross-validation to obtain the average cost, and three evaluation criteria are adopted to evaluate classification performance in the cost-sensitive experiments: average cost (AC), error rate of false acceptance (Err(IG)), and error rate of false rejection (Err(GI)). For the class-imbalance problem, we choose the F-measure and G-mean to evaluate performance. Following [25, 26], these criteria are defined as

$$\text{AC} = \frac{C_{FN} N_{FR} + C_{FP} N_{FA}}{N}, \qquad \text{Err(GI)} = \frac{N_{FR}}{N_+}, \qquad \text{Err(IG)} = \frac{N_{FA}}{N_-},$$

$$\text{F-measure} = \frac{2\,\text{Acc}_+\,\text{Acc}_-}{\text{Acc}_+ + \text{Acc}_-}, \qquad \text{G-mean} = \sqrt{\text{Acc}_+\,\text{Acc}_-},$$

where $N_{FA}$ and $N_{FR}$ represent the number of false acceptances and false rejections, respectively; $N$, $N_+$, and $N_-$ represent the number of test samples, positive-class samples, and negative-class samples, with $N = N_+ + N_-$; and $\text{Acc}_+$ and $\text{Acc}_-$ denote the classification accuracies of the positive and negative classes.
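The evaluation criteria above can be computed directly from the confusion-matrix counts. A small sketch, where the F-measure and G-mean follow the paper's description as the harmonic and geometric means of the per-class accuracies, and the cost values `c_fn`, `c_fp` are illustrative:

```python
import math

def imbalance_metrics(tp, fn, fp, tn, c_fn=10.0, c_fp=1.0):
    """Confusion-matrix metrics used in the experiments (illustrative costs)."""
    n_pos, n_neg = tp + fn, fp + tn
    acc_pos = tp / n_pos                     # positive-class accuracy (recall)
    acc_neg = tn / n_neg                     # negative-class accuracy
    return {
        "average_cost": (c_fn * fn + c_fp * fp) / (n_pos + n_neg),
        "err_GI": fn / n_pos,                # false-rejection rate
        "err_IG": fp / n_neg,                # false-acceptance rate
        "f_measure": 2 * acc_pos * acc_neg / (acc_pos + acc_neg),
        "g_mean": math.sqrt(acc_pos * acc_neg),
    }
```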

In order to illustrate the performance of CSSRC, sparse representation based classification (SRC), the Cost-Sensitive Support Vector Machine (CSSVM), and the Cost-Sensitive Semisupervised Support Vector Machine (CS4VM) are chosen as comparison methods in three experiments. The experiments are performed in MATLAB R2014a on a computer with a 2.6 GHz Intel Xeon CPU.

4.2. The Effect of Cost for SRC

The Housing data set is smaller than the other six data sets, so fewer samples are selected for its training and test sets: we randomly select 31 positive and 31 negative samples from Housing as test samples and 41 positive and 41 negative samples as training samples. From Abalone, Nursery, Letter, Pima, Cmc, and Car, we select 61 positive and 61 negative samples as test samples and 101 positive and 101 negative samples as training samples. We repeat the sampling 100 times and report the average results.

Experiment 1. We compare the performance of the four methods (CSSRC, SRC, CSSVM, and CS4VM) on Abalone, Nursery, Letter, Pima, Cmc, Housing, and Car. We set the cost ratio (the cost of false acceptance with respect to false rejection) to 10, and the results are summarized in Table 2. From Table 2, we can see that the proposed cost-sensitive approach achieves a lower average misclassification cost than the other three methods on all data sets except Cmc. On Cmc, CSSRC's average cost (0.5122) is higher than CS4VM's (0.5105) but lower than the other two methods', and the two values are of the same order of magnitude; on the other six data sets, CSSRC's cost is lower than CS4VM's. In other words, our method performs better than SRC, CSSVM, and CS4VM.

Table 2: Average cost of the four methods (cost ratio 1 : 10).

Experiment 2. Following the setting of Experiment 1, we vary the cost ratio and plot the resulting error rates in Figure 1. For both the positive and the negative class, the proposed method achieves a lower error rate on Nursery and Abalone when the cost ratio ranges from 5 to 50. Although CS4VM obtains a lower error rate of false rejection, its error rate of false acceptance is very high, which can generate a serious total cost. From Figure 1, we can easily see that our method achieves a lower error rate of false rejection and a lower error rate of false acceptance simultaneously.

Figure 1: Error rate of false acceptance and false rejection.

Experiment 3. In this experiment, we set the cost ratio from 10 to 50, and the results are summarized in Table 3, where the first row gives the cost ratios and the first two columns give the data sets and classification methods, respectively. Experiment 2 used merely two data sets; to demonstrate the robustness of our method, more data sets are adopted here. Our proposed cost-sensitive SRC achieves a lower average cost on four data sets. Although it does not achieve the lowest cost on Nursery and Letter, its cost has the same order of magnitude as the lowest value.

Table 3: Average cost of methods on five data sets with cost ratio from 10 to 50.

The above three experiments have demonstrated the effect of the cost term in SRC. In particular, the comparison between SRC and CSSRC validates the conclusion that the cost term can improve the performance of SRC.

4.3. Solving Class-Imbalance Problem

Experiment 1. In this section we address the class-imbalance problem. Table 1 summarizes the information of the data sets used; the data sets with an imbalance ratio higher than 10 are Nursery and Letter. In order to set a higher imbalance ratio, we select Nursery in this experiment. As before, we compare the performance of the four methods (SRC, CSSVM, CS4VM, and CSSRC) on Nursery. Since misclassification cost alone cannot fully reflect performance on the class-imbalance problem, the F-measure, G-mean, and classification accuracy are adopted. In this experiment, we take a range of increasing imbalance ratios: the size of the minority class in the training set is 30, and the size of the majority class is 30 multiplied by the imbalance ratio. We select 61 positive samples and 61 negative samples as the test set; the sampling process is repeated 100 times, and the average results are summarized in Figures 2 and 3.
Figure 2 shows the results of the F-measure on Nursery; the definition of the F-measure (the harmonic mean of the classification accuracies of the positive and negative classes) is given in Section 4.1. It is obvious that our method achieves a higher F-measure than sparse representation based classification, the Cost-Sensitive Support Vector Machine, and the Cost-Sensitive Semisupervised Support Vector Machine. Moreover, the proposed method remains more stable as the imbalance ratio increases. Similarly, the G-mean (the geometric mean of the classification accuracies of the positive and negative classes) in Figure 3 is also higher than that of the other three methods.
Evaluating methods on the class-imbalance problem is difficult, so for additional evidence we also report classification accuracy, summarized in Table 4. On the other hand, running time reflects the computational cost of a method; the results are shown in Table 5. It is obvious that our method attains the highest classification accuracy and the lowest running time on Nursery. In this paper, we use the sparse representation coefficient vector to estimate the posterior probability, which reduces the computational complexity and computational cost.

Table 4: Classification accuracy on Nursery.
Table 5: Running time on Nursery.
Figure 2: The result of the F-measure on Nursery.
Figure 3: The result of the G-mean on Nursery.

Experiment 2. In this experiment, we further validate the applicability of our method to the class-imbalance problem. In Experiment 1 we tested the validity of our method when the class distribution of the training samples is imbalanced; here we additionally consider an imbalanced distribution of test samples. Table 1 summarizes the information of the data sets used, and the imbalance ratio of Letter is 24.3. In order to set a higher imbalance ratio, we select Letter in this experiment. As before, we compare the performance of the four methods (SRC, CSSVM, CS4VM, and CSSRC) on Letter. We again take a range of increasing imbalance ratios: the size of the minority class is 30, and the size of the majority class is 30 multiplied by the imbalance ratio. We select 61 positive samples and 61 negative samples as the test set; the sampling process is repeated 100 times, and the average results are summarized in Figures 4 and 5.
Figure 4 shows the F-measure with imbalanced training samples, and Figure 5 shows the F-measure with imbalanced test samples. It is obvious from Figures 4 and 5 that our method achieves a more stable and higher result on Letter than the other three methods. Although sparse representation based classification attains an F-measure similar to our method's in Figure 5, its running time is higher, as shown in Table 6. From the experiments in this section, comparing the F-measure under imbalanced distributions of training and test samples as well as the running time, we can conclude that our method outperforms the other three methods and can effectively address the class-imbalance problem.

Table 6: Running time on Letter.
Figure 4: The result of the F-measure on Letter (the distribution of training samples is imbalanced).
Figure 5: The result of the F-measure on Letter (the distribution of test samples is imbalanced).

5. Conclusions and Future Works

In this paper, we propose a novel cost-sensitive SRC classification approach. The proposed approach adopts a probabilistic model and the sparse representation coefficient vector to estimate the posterior probability and then obtains the label of a test sample by minimizing the misclassification losses. The experimental results show that the proposed cost-sensitive SRC attains a comparable or even lower total cost with higher accuracy compared to the other three classification algorithms. The experiments also show that our method can effectively solve the class-imbalance problem. In real-world applications, nearly all data sets are class-imbalanced; our approach helps overcome the difficulties that imbalanced data distributions bring.

In order to simplify the cost matrix, we restrict our discussion to two-class problems. So extending our current work to multiclass scenario is a main research direction for our future work.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant nos. 61562013 and 21365008), Natural Science Foundation of Guangxi (Grant no. 2013GXNSFBA019279), Innovation Project of GUET Graduate Education (no. YJCXS201558), and the Center for Collaborative Innovation in the Technology of IOT and the Industrialization (WLW20060610).

References

  1. J. Mairal, M. Elad, and G. Sapiro, “Sparse representation for color image restoration,” IEEE Transactions on Image Processing, vol. 17, no. 1, pp. 53–69, 2008.
  2. J. Mairal, J. Ponce, G. Sapiro et al., “Supervised dictionary learning,” in Proceedings of the 21st Conference on Advances in Neural Information Processing Systems (NIPS '08), vol. 1, pp. 1033–1040, Vancouver, Canada, December 2008.
  3. J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
  4. M. Yang and L. Zhang, “Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary,” in Computer Vision—ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5–11, 2010, Proceedings, Part VI, vol. 6316 of Lecture Notes in Computer Science, pp. 448–461, Springer, Berlin, Germany, 2010.
  5. E. Alpaydın, “Machine learning,” Wiley Interdisciplinary Reviews: Computational Statistics, vol. 3, no. 3, pp. 195–203, 2011.
  6. Y. Li, J. T. Kwok, and Z.-H. Zhou, “Cost-sensitive semi-supervised support vector machine,” in Proceedings of the 24th National Conference on Artificial Intelligence, vol. 1, pp. 500–505, Atlanta, Ga, USA, July 2010.
  7. Y. Zhang and Z.-H. Zhou, “Cost-sensitive face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1758–1769, 2010.
  8. X. Fang, “Inference-based naïve Bayes: turning naïve Bayes cost-sensitive,” IEEE Transactions on Knowledge & Data Engineering, vol. 25, no. 10, pp. 2302–2313, 2013.
  9. C. L. Castro and A. P. Braga, “Novel cost-sensitive approach to improve the multilayer perceptron performance on imbalanced data,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 6, pp. 888–899, 2013.
  10. L. Jiang, C. Li, and S. Wang, “Cost-sensitive Bayesian network classifiers,” Pattern Recognition Letters, vol. 45, pp. 211–216, 2014.
  11. H.-Y. Lo, S.-D. Lin, and H.-M. Wang, “Generalized k-labelsets ensemble for multi-label and cost-sensitive classification,” IEEE Transactions on Knowledge & Data Engineering, vol. 26, no. 7, pp. 1679–1691, 2014.
  12. J. Wan, M. Yang, and Y. Chen, “Discriminative cost sensitive Laplacian score for face recognition,” Neurocomputing, vol. 152, pp. 333–344, 2015.
  13. R. Pearson, G. Goney, and J. Shwaber, “Imbalanced clustering for microarray time-series,” in Proceedings of the Workshop on Learning from Imbalanced Datasets II (ICML '03), p. 3, Washington, DC, USA, 2003.
  14. Z.-H. Zhou and X.-Y. Liu, “Training cost-sensitive neural networks with methods addressing the class imbalance problem,” IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 1, pp. 63–77, 2006.
  15. Y. Sun, M. S. Kamel, A. K. C. Wong, and Y. Wang, “Cost-sensitive boosting for classification of imbalanced data,” Pattern Recognition, vol. 40, no. 12, pp. 3358–3378, 2007.
  16. L. Jiang, C. Qiu, and C. Li, “A novel minority cloning technique for cost-sensitive learning,” International Journal of Pattern Recognition & Artificial Intelligence, vol. 29, no. 4, Article ID 1551004, 2015.
  17. J. Man, X. Jing, D. Zhang, and C. Lan, “Sparse cost-sensitive classifier with application to face recognition,” in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP '11), pp. 1773–1776, IEEE, Brussels, Belgium, September 2011.
  18. J. Lu and Y.-P. Tan, “Cost-sensitive subspace learning for face recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 2661–2666, IEEE, San Francisco, Calif, USA, June 2010.
  19. J. Lu and Y.-P. Tan, “Cost-sensitive subspace analysis and extensions for face recognition,” IEEE Transactions on Information Forensics & Security, vol. 8, no. 3, pp. 510–519, 2013.
  20. M. Z. Kukar and I. Kononenko, “Cost-sensitive learning with neural networks,” in Proceedings of the 13th European Conference on Artificial Intelligence (ECAI '98), pp. 445–449, Brighton, UK, August 1998.
  21. J. Wang, C. Lu, M. Wang, P. Li, S. Yan, and X. Hu, “Robust face recognition via adaptive sparse representation,” IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2368–2378, 2014.
  22. E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
  23. D. L. Donoho, “For most large underdetermined systems of equations, the minimal l1-norm near-solution approximates the sparsest near-solution,” Communications on Pure & Applied Mathematics, vol. 59, no. 7, pp. 907–934, 2006.
  24. C. Blake, E. Keogh, and C. J. Merz, UCI Repository of Machine Learning Databases, Department of Information and Computer Science, University of California, Irvine, Calif, USA, 1998.
  25. B. G. Hu and W. M. Dong, “A study on cost behaviors of binary classification measures in class-imbalanced problems,” Computer Science, vol. 8, no. 11, Article ID e79774, 2014.
  26. M. Z. Kukar and I. Kononenko, “Cost-sensitive learning with neural networks,” in Proceedings of the 13th European Conference on Artificial Intelligence (ECAI '98), pp. 445–449, Brighton, UK, August 1998.