Abstract

Linear regression (LR) and its variants have been widely used for classification problems. However, they usually predefine a strict binary label matrix which has no freedom to fit the samples. In addition, they cannot deal with complex real-world applications such as face recognition, where samples may not be linearly separable owing to varying poses, expressions, and illumination conditions. Therefore, in this paper, we propose the kernel negative dragging linear regression (KNDLR) method for robust classification on noised and nonlinear data. First, a technique called negative dragging is introduced for relaxing class labels and is integrated into the LR model so as to properly adjust the class margin of conventional linear regression and obtain robust results. Then, the data is implicitly mapped into a high dimensional kernel space by the nonlinear mapping determined by a kernel function, which makes the data more linearly separable. Finally, the obtained KNDLR method is able to partially alleviate the problem of overfitting and can perform classification well for noised and deformable data. Experimental results show that the KNDLR classification algorithm achieves better generalization performance and leads to more robust classification decisions.

1. Introduction

Least squares regression (LSR) has been widely used in many fields of pattern recognition and computer vision. Because LSR is mathematically tractable and computationally efficient, many variants have been proposed in the past. Notable LSR algorithms include weighted LSR [1], partial LSR [2], and other extensions (e.g., nonnegative least squares (NNLS) [3]). In the pattern recognition community, LSR is also referred to as the minimum squared error algorithm [4–6]. Moreover, competitive extensions of least squares regression, such as regularized least squares regression [7], have also been proposed. Among the extensions of least squares regression, sparse regression [8] and low-rank regression [9, 10] obtain notable performance. The relationship between regression and other methods, such as locally linear embedding and local tangent space alignment, has also been studied [11]. In addition, LSR has been applied to semisupervised learning. Nie et al. [12] proposed adaptive loss minimization for semisupervised elastic embedding. Fang et al. [13] proposed learning a nonnegative sparse graph for linear regression in semisupervised learning, in which linear regression and graph learning are performed simultaneously to guarantee an overall optimum.

LSR can be simply described as follows. Before conventional least squares regression (CLSR) is applied for classification [3, 14, 15], different fixed class labels are assigned to the training samples of different classes. Then the least squares regression algorithm is employed to obtain a mapping that transforms training samples into approximations of their class labels. Finally, CLSR uses the obtained mapping to predict the class label of every test sample. In addition to classification problems, least squares regression has also been applied to subspace segmentation [16], matrix recovery [17], and feature selection [18].

The recently proposed sparse representation classification (SRC) [19–21] can be regarded as a special form of least squares regression. Differing from LSR, it approximates a test sample via a sparse linear combination of all training samples. Collaborative representation [22] and linear regression classification [23] are similar methods. An overview of sparse representation is provided in [24]. However, for classification tasks, because SRC must solve a set of equations to classify every sample, CLSR is computationally much more efficient than SRC.

Xiang et al. proposed discriminative least squares regression (DLSR) [25]. The core idea is, under the conceptual framework of least squares regression, to achieve a larger class margin than that obtained by CLSR by using the dragging technique, which plays a role in enlarging the margin similar to other large margin classifiers proposed in [26–28]. The idea of using slack variables to relax the model has been widely used in related fields [29]. When the distribution of the training samples is in accordance with that of the test samples, the classifier learned from the training samples can adapt well to the test samples. Under this condition, since the classifier learned from the training samples has a very large class margin, it can also obtain a satisfactory class margin for the test samples. Accordingly, the original dragging technique can perform well; in other words, high classification accuracy can be produced. However, in real-world applications, owing to noise or the deformability of the object, the difference between training samples and test samples from the same class may be large. For example, it is well known that face images are a kind of deformable object (owing to varying poses, expressions, and illumination conditions). Two face images of the same subject may differ greatly, and this difference may be even greater than that between two face images of two distinct subjects. In this case, a large margin classifier obtained from the training samples is not usually suitable for the test samples; in other words, it probably performs badly in classifying them. On the contrary, reducing the class margin usually achieves better classification accuracy for classification problems on noised data. Thus, we focus on determining a proper margin by using the negative dragging technique and producing a robust classifier for pattern classification on noised and deformable data.

Furthermore, we focus on introducing the kernel trick to improve the dragging linear regression. In machine learning, the kernel trick was originally utilized to construct nonlinear support vector machines (SVMs) [30–32]. In the past decade and more, many kernel-based approaches have been proposed, such as the well-known kernel principal component analysis (KPCA) [33, 34] and kernel Fisher discriminant analysis (KFDA) [35]. For classification, Yu et al. presented the kernel nearest neighbor (KERNEL-NN) classifier [36]. KERNEL-NN applies the nearest neighbor classification method in the high dimensional feature space and can perform better than the NN classifier when an appropriate kernel is utilized. Kernel sparse representation classification (KSRC) has also been presented [37, 38]. So far, by using kernel tricks [39], almost all linear learning methods can be generalized to corresponding nonlinear ones. The kernel trick [40] goes a large step toward the goal of classifying heterogeneous data. These kernel-based algorithms extend the modeling ability of linear algorithms: they first implicitly map the data in the input space into a high or even infinite dimensional kernel feature space [18, 41] by a nonlinear mapping and then perform linear processing in the kernel feature space by using inner products, which can be computed by a kernel function. As a result, these kernel-based algorithms perform a nonlinear transformation with respect to the input space.

As is well known, the kernel approach can change the distribution of samples by a nonlinear mapping. If an appropriate kernel function is utilized, the kernel approach is able to make the data of different classes more linearly separable. Therefore, kernel-based algorithms can perform classification well. This motivated us to integrate the kernel method into linear regression for classification. If an appropriate kernel function is utilized, more samples from the same class are close to each other and samples from distinct classes are far from each other in the high dimensional feature space. Hence, in the high dimensional feature space, it is easy to learn a mapping that can well convert training samples into their class labels. Namely, the linear transformation matrix learned in the high dimensional feature space can more appropriately map samples into their class labels and has more powerful discriminating ability.

Based on the above two aspects, we propose the kernel negative dragging linear regression (KNDLR) method in this paper. In KNDLR, samples are first implicitly mapped into a high dimensional feature space, and then linear regression with negative dragging is performed in this new feature space. We show that KNDLR in the high dimensional feature space can be formulated in terms of inner products, which can be computed by a kernel function. Thus KNDLR is easy to implement and has a low computational cost. The classifier generalizes well because we propose and use the negative dragging technique and also integrate the kernel approach into KNDLR. Comprehensive experiments demonstrate the superior characteristics of KNDLR. In summary, the contributions of the proposed method are as follows.

(1) It relaxes the strict binary label matrix used in conventional LR into a slack variable matrix which has more freedom to fit the samples. Proper margins between different classes are achieved by using the negative dragging technique. Previous research usually focuses on enlarging the margin between different classes, whereas the negative dragging technique proposed here takes the contrary view, which is useful for overcoming the overfitting problem and enhancing the robustness of the algorithm on unseen samples, for example, test samples.

(2) The kernel approach is integrated into our method. We show that KNDLR in the high dimensional feature space can be formulated in terms of inner products, and the inner products can be computed by the kernel function. Thus KNDLR only needs to evaluate the kernel function rather than explicitly computing the data representation in the high dimensional feature space corresponding to the kernel function.

(3) An algorithm named KNDLR is devised for the proposed method. The validity of the algorithm is tested on six image datasets.

The rest of the paper is organized as follows. Section 2 briefly reviews work related to this paper. In Section 3, our method is presented. In Section 4, an analysis of our method is provided. Experimental results are reported in Section 5. Finally, Section 6 concludes this paper.

2. Related Work

In this section, we first introduce CLSR for classification. Then, the kernel trick is briefly reviewed.

2.1. Conventional Least Squares Regressions for Classification

The collection of training samples is represented as a matrix $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{d \times n}$. $x_i$ is a training sample in the form of a column vector. If the training sample is a two-dimensional image, then it is converted into one column vector in advance. The objective function of conventional least squares regression (CLSR) for classification is as follows:
$$\min_{W} \; \|X^{T}W - Y\|_F^{2} + \lambda \|W\|_F^{2}, \tag{1}$$
where $Y \in \mathbb{R}^{n \times c}$ ($c$ is the number of classes) is the binary class label matrix and the $i$th row of $Y$ is the class label vector of the $i$th sample.

For a three-class classification problem, in CLSR the class label matrix of four samples may be
$$Y = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \tag{2}$$
which indicates that the first and second samples are from the first class, the third sample is from the third class, and the fourth sample is from the second class. $W$ is the transformation matrix which converts the sample matrix $X$ into the binary class label matrix $Y$. $\|\cdot\|_F$ stands for the Frobenius norm of a matrix. In the above CLSR for classification, the class label matrix is predefined and fixed.
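To make the regression step concrete, the following is a minimal NumPy sketch of regularized CLSR under the conventions above (samples as columns of $X$, one-hot label rows in $Y$); the function names are our own illustrative choices, not part of the original formulation.

```python
import numpy as np

def clsr_fit(X, Y, lam=0.01):
    """Regularized least squares regression for classification.

    X : (d, n) matrix whose columns are training samples.
    Y : (n, c) binary class label matrix (one-hot rows).
    Returns W of shape (d, c) minimizing ||X^T W - Y||_F^2 + lam * ||W||_F^2.
    """
    d = X.shape[0]
    return np.linalg.solve(X @ X.T + lam * np.eye(d), X @ Y)

def clsr_predict(W, z):
    """Assign the test sample z (length-d vector) to the class with the largest response."""
    return int(np.argmax(W.T @ z))
```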

2.2. Kernel Trick

The kernel trick is a very powerful technique in machine learning. It has been successfully applied to many methods, such as SVM [31, 32], KPCA [33, 34], and KFDA [35]. By using kernel tricks, a linear algorithm can be easily generalized to a nonlinear algorithm.

A Mercer kernel is generally used in kernel methods. It is a continuous, symmetric, positive semidefinite kernel function. Given a Mercer kernel $k(\cdot,\cdot)$, there is a unique associated reproducing kernel Hilbert space (RKHS). Usually, a Mercer kernel can be expressed as
$$k(x, y) = \phi(x)^{T}\phi(y), \tag{3}$$
where $(\cdot)^{T}$ denotes the transpose of a matrix or vector, $x$ and $y$ are any two points in the input space, and $\phi(\cdot)$ is the implicit nonlinear mapping associated with the kernel function $k$. When implementing kernel methods, we do not need to know what $\phi$ is and just adopt the kernel function defined in (3). Here the kernel function is the connection between the learning algorithm and the data. Linear kernels, polynomial kernels, Gaussian radial basis function (RBF) kernels, and wavelet kernels [18, 40, 41] are commonly used in kernel methods. The polynomial kernel has the form of
$$k(x, y) = \left(x^{T}y + c\right)^{p}, \tag{4}$$
where $c$ is a constant and $p$ is the order of the polynomial. RBF kernels can be expressed as
$$k(x, y) = \exp\!\left(-\frac{\|x - y\|^{2}}{2\sigma^{2}}\right), \tag{5}$$
where $\sigma$ is the parameter of the RBF kernel and $\|x - y\|$ is the distance between the two vectors.
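For concreteness, the polynomial kernel (4) and the RBF kernel (5) can be evaluated for whole sample matrices as in the following sketch (NumPy, with columns as samples; the $2\sigma^{2}$ scaling of the RBF kernel is one common convention and the function names are ours):

```python
import numpy as np

def polynomial_kernel(X1, X2, c=1.0, degree=2):
    """k(x, y) = (x^T y + c)^degree for all column pairs of X1 (d, n1) and X2 (d, n2)."""
    return (X1.T @ X2 + c) ** degree

def rbf_kernel(X1, X2, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all column pairs of X1 and X2."""
    sq = (np.sum(X1 ** 2, axis=0)[:, None]      # ||x_i||^2
          + np.sum(X2 ** 2, axis=0)[None, :]    # ||y_j||^2
          - 2.0 * X1.T @ X2)                    # -2 x_i^T y_j
    return np.exp(-sq / (2.0 * sigma ** 2))
```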

3. Our Method

3.1. Solving the Optimization Model

Training samples in the input space are represented as a matrix $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{d \times n}$. Let $\phi(\cdot)$ be the nonlinear mapping function corresponding to a kernel $k(\cdot,\cdot)$. Firstly, we implicitly employ $\phi$ to map the data from the input space to a high dimensional kernel feature space. We have
$$\phi(X) = \left[\phi(x_1), \phi(x_2), \ldots, \phi(x_n)\right]. \tag{6}$$

Then, for classification, we should transform the sample set to a class label matrix. But the class label matrix $Y$ in CLSR is a strict binary label matrix which has little freedom to fit the samples. It is expected that the original strict binary constraints in $Y$ can be relaxed into soft constraints so that the label matrix has more freedom to fit the samples and simultaneously produces a classifier that generalizes well. To this end, a slack variable matrix $T$, which is different from the relaxed label matrix used in DLSR, is used to substitute for the original class label matrix $Y$. The four samples in Section 2.1 are also taken as an example here, and the slack variable class label matrix is defined as follows:
$$T = \begin{pmatrix} 1 - m_{11} & m_{12} & m_{13} \\ 1 - m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & 1 - m_{33} \\ m_{41} & 1 - m_{42} & m_{43} \end{pmatrix}, \quad m_{ij} \ge 0. \tag{7}$$

It can be seen that $T$ can help to properly reduce the class margins of CLSR so that the learned classifier generalizes well. Formally, let $B \in \mathbb{R}^{n \times c}$ be a dragging matrix defined as
$$B_{ij} = \begin{cases} +1, & Y_{ij} = 1, \\ -1, & Y_{ij} = 0. \end{cases} \tag{8}$$
Meanwhile, let $M \in \mathbb{R}^{n \times c}$ be the dragging coefficient matrix defined as
$$M = \left(m_{ij}\right), \quad m_{ij} \ge 0; \tag{9}$$
then $T = Y - B \odot M$, where $\odot$ is the Hadamard product operator of matrices. Relaxing $Y$ into $T$ embodies an idea opposite to that of the dragging technique in DLSR; therefore we call this relaxation the negative dragging.
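The negative dragging relaxation $T = Y - B \odot M$ can be illustrated with a short sketch (the helper name is our own; it simply builds $B$ from $Y$ and applies the Hadamard product):

```python
import numpy as np

def negative_dragging_labels(Y, M):
    """Relax the binary label matrix Y (n, c) with non-negative coefficients M (n, c).

    B_ij = +1 where Y_ij = 1 and -1 where Y_ij = 0, so T = Y - B * M pulls the '1'
    entries down and pushes the '0' entries up, shrinking (not enlarging) the margin.
    """
    B = np.where(Y == 1, 1.0, -1.0)
    T = Y - B * M
    return B, T
```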

By virtue of the kernel feature space, our method tries to construct a bridge between $\phi(X)$ and $T$. In particular, our goal is to learn a linear function that makes $\phi(X)^{T}W \approx T$ approximately hold. Thus our method has the following objective function:
$$\min_{W,\, M \ge 0} \; \left\|\phi(X)^{T}W - \left(Y - B \odot M\right)\right\|_F^{2} + \lambda \|W\|_F^{2}, \tag{10}$$
where $W$ is the transformation matrix and $\lambda$ is a positive regularization parameter.

Since $Y$ is relaxed into $T = Y - B \odot M$, (10) has more freedom than (1) to fit the samples. From elementary linear algebra, we know that
$$\|A\|_F^{2} = \mathrm{tr}\left(A^{T}A\right). \tag{11}$$

It is easy to prove that objective function (10) is convex. Thus it has a unique solution. An iterative updating algorithm is devised to solve it. The first step of the algorithm is to solve $W$ by fixing $M$.

Theorem 1. Given $T = Y - B \odot M$, the optimal $W$ in (10) can be calculated as
$$W = \left(\phi(X)\phi(X)^{T} + \lambda I\right)^{-1}\phi(X)\,T. \tag{12}$$

Proof. According to matrix theory, the optimal $W$ can be obtained by taking the derivative of (10) with respect to $W$ and setting it to zero. That is,
$$\phi(X)\left(\phi(X)^{T}W - T\right) + \lambda W = 0 \;\Longrightarrow\; W = \left(\phi(X)\phi(X)^{T} + \lambda I\right)^{-1}\phi(X)\,T. \tag{13}$$
The second step of our algorithm is to solve $M$ by fixing $W$. Then (10) can be rewritten as $\min_{M \ge 0} \|\phi(X)^{T}W - Y + B \odot M\|_F^{2}$. $M$ can be obtained by solving the following optimization problem:
$$\min_{M \ge 0} \; \|P + B \odot M\|_F^{2}, \tag{14}$$
where
$$P = \phi(X)^{T}W - Y. \tag{15}$$
Considering the $i$th row and $j$th column element of $M$, we have
$$\min_{M_{ij} \ge 0} \; \left(P_{ij} + B_{ij}M_{ij}\right)^{2}. \tag{16}$$
According to [25], the formula to calculate $M_{ij}$ is
$$M_{ij} = \max\left(-B_{ij}P_{ij},\, 0\right). \tag{17}$$
Therefore, the optimal solution of $M$ is
$$M = \max\left(-B \odot P,\, 0\right). \tag{18}$$
In a word, the first step of the algorithm is to solve $W$ by fixing $M$, and the second step of the algorithm is to solve $M$ by fixing $W$. In other words, (12) should be calculated in the first step, and (15) and (18) should be calculated in the second step. These two steps should be repeatedly calculated till the termination condition is satisfied.
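As a small illustration of the second step under the notation above (with $P$ the residual and $B$ the $\pm 1$ dragging matrix), the closed-form update of $M$ can be written in a few lines of NumPy; the function name is our own:

```python
import numpy as np

def update_M(P, B):
    """Closed-form update of the dragging coefficient matrix.

    Minimizes ||P + B * M||_F^2 element-wise subject to M >= 0,
    which gives M_ij = max(-B_ij * P_ij, 0).
    """
    return np.maximum(-B * P, 0.0)
```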

3.2. Integrating the Kernel Trick into the Optimization Model

As mentioned above, we should repeatedly calculate (12) and (18). However, in (12) and (15), $\phi(X)$ exists only in the kernel feature space. Fortunately, we do not need to know what $\phi$ is and can just adopt the kernel function (3). How to use the kernel function to eliminate the explicit $\phi(X)$ is presented as follows.

Let
$$K = \phi(X)^{T}\phi(X), \quad K_{ij} = \phi(x_i)^{T}\phi(x_j) = k\left(x_i, x_j\right). \tag{19}$$

By using the following formula [42] on matrix manipulations:
$$\left(\lambda I + UV\right)^{-1}U = U\left(\lambda I + VU\right)^{-1}, \tag{20}$$
we use $\phi(X)$, $\phi(X)^{T}$, and $K$ instead of $U$, $V$, and $VU$, respectively, having
$$\left(\phi(X)\phi(X)^{T} + \lambda I\right)^{-1}\phi(X) = \phi(X)\left(\phi(X)^{T}\phi(X) + \lambda I\right)^{-1}, \tag{21}$$
that is,
$$\left(\phi(X)\phi(X)^{T} + \lambda I\right)^{-1}\phi(X) = \phi(X)\left(K + \lambda I\right)^{-1}. \tag{22}$$
Then, we substitute it into (12); therefore
$$W = \phi(X)\left(K + \lambda I\right)^{-1}T = \phi(X)A, \tag{23}$$
where $A = \left(K + \lambda I\right)^{-1}T$.

Actually, $\left(K + \lambda I\right)^{-1}$ in (23) is unchanged during the iteration because it only depends on $X$ and the utilized kernel function, while $T$ changes during the iteration; hence, to avoid directly calculating $\phi(X)$, in the first step we only need to calculate
$$A = \left(K + \lambda I\right)^{-1}T. \tag{24}$$

The second step of the algorithm is to solve $M$ by calculating (15) and (18). By substituting (23) into (15), we have
$$P = \phi(X)^{T}\phi(X)A - Y = K\left(K + \lambda I\right)^{-1}T - Y = KA - Y. \tag{25}$$

Hence, in the second step we need to calculate (25) and (18).

Then the predicted label vector for a test sample $z$ is
$$y = W^{T}\phi(z). \tag{26}$$

Intuitively, $W$ should be calculated by the iteration and then utilized to calculate the predicted label for the test sample $z$. However, by substituting (23) into (26), we have
$$y = A^{T}\phi(X)^{T}\phi(z) = A^{T}k_{z}, \tag{27}$$
where $k_{z} = \left(k(x_1, z), k(x_2, z), \ldots, k(x_n, z)\right)^{T}$.

Because $k_{z}$ depends only on the training samples and the utilized kernel function, we only need to calculate $A$ by the iteration; after the iteration is performed, the predicted label for a test sample can be obtained by (27). As presented above, directly calculating $\phi(X)$ can be avoided by utilizing the kernel function.
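Classification of a new sample thus only requires $A$ and the kernel vector $k_{z}$, as the following sketch shows (the helper names are ours; the kernel argument is any Gram-matrix function such as those sketched in Section 2.2):

```python
import numpy as np

def kndlr_predict(A, X_train, z, kernel):
    """Predict the class of test sample z without forming phi explicitly.

    A        : (n, c) coefficient matrix obtained from the training iteration.
    X_train  : (d, n) training samples (columns).
    kernel   : callable kernel(X1, X2) returning the Gram matrix of column pairs.
    """
    k_z = kernel(X_train, z.reshape(-1, 1))   # (n, 1) kernel vector k_z
    y = A.T @ k_z                             # (c, 1) label responses, as in (27)
    return int(np.argmax(y))
```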

In summary, we do not need to know what $\phi$ is and just adopt the kernel function during the iteration. The complete algorithm is summarized in Algorithm 1.

Input: Training samples matrix $X$; label matrix $Y$; dragging coefficient matrix $M$; test
sample $z$; parameter $\lambda$;
Output: the slack variable class label matrix $T$; predicted class for test sample $z$;
Initialization: $M = 0$, $T = Y$, and $B_{ij} = 1$ if $Y_{ij} = 1$, $B_{ij} = -1$ otherwise;
Calculate $K$ with $K_{ij} = k(x_i, x_j)$ and the inverse $(K + \lambda I)^{-1}$;
Set threshold $\varepsilon$; Set the iteration counter $t = 1$.
Repeat
Given $T$, calculate $A = (K + \lambda I)^{-1}T$.
Utilize $A$, then calculate $P = KA - Y$, $M = \max(-B \odot P, 0)$, and $T = Y - B \odot M$.
Until the absolute value of the difference between objective functions of two consecutive
loops is smaller than the threshold $\varepsilon$.
For test sample $z$, calculate $y = A^{T}k_{z}$, where $k_{z} = (k(x_1, z), \ldots, k(x_n, z))^{T}$.
If $y_{l} = \max_{j} y_{j}$, then $z$ is classified into the $l$th class. $y_{j}$ is the $j$th entry of $y$.
Output: the coefficient matrix $A$ and the slack variable class label matrix $T$.
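For readers who prefer code, Algorithm 1 can be sketched as follows: a minimal NumPy version under the reconstruction used in this section, where the function name, the maximum iteration cap, and the exact form of the monitored objective value are our own choices rather than specifications from the paper.

```python
import numpy as np

def kndlr_fit(K, Y, lam=0.01, eps=1e-4, max_iter=100):
    """Train KNDLR given a precomputed kernel matrix K (n, n) and binary labels Y (n, c)."""
    n = K.shape[0]
    B = np.where(Y == 1, 1.0, -1.0)           # fixed dragging directions
    M = np.zeros(Y.shape)                     # dragging coefficients, start at zero
    C = np.linalg.inv(K + lam * np.eye(n))    # (K + lam I)^{-1}, computed only once
    prev_obj = np.inf
    for _ in range(max_iter):
        T = Y - B * M                         # negatively dragged labels, eq. (7)-(9)
        A = C @ T                             # step 1: A = (K + lam I)^{-1} T, eq. (24)
        P = K @ A - Y                         # residual phi(X)^T W - Y in kernel form, eq. (25)
        M = np.maximum(-B * P, 0.0)           # step 2: element-wise dragging update, eq. (18)
        obj = (np.linalg.norm(K @ A - (Y - B * M), 'fro') ** 2
               + lam * np.trace(A.T @ K @ A)) # objective of (10) expressed through A
        if abs(prev_obj - obj) < eps:
            break
        prev_obj = obj
    return A, Y - B * M
```

In practice $K$ would come from one of the kernel functions sketched in Section 2.2, and prediction then uses the kernel vector between the training samples and the test sample as in (27).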

4. Analysis of Our Method

In our method, the negative dragging technique and the kernel trick are simultaneously integrated into the LR model to obtain more robust classification results for noised and deformable data. We analyze our method from two aspects.

Firstly, we present the class margins of our method, DLSR, and CLSR for classification. For simplicity of description, the four samples in Section 2.1 are also taken as an example here. For our method, the relaxed label matrix is $T = Y - B \odot M$ as given in (7). For DLSR, the relaxed label matrix is $Y + B \odot M$, whose first and third rows are $\left(1 + m_{11}, -m_{12}, -m_{13}\right)$ and $\left(-m_{31}, -m_{32}, 1 + m_{33}\right)$, respectively. Suppose that the dragging coefficient matrices of our method and DLSR have the same components. For the first and third samples (they, respectively, belong to the first and third classes), the distance between their class labels in our method can be denoted by
$$d_{\mathrm{ours}} = \sqrt{\left(1 - m_{11} - m_{31}\right)^{2} + \left(m_{12} - m_{32}\right)^{2} + \left(1 - m_{13} - m_{33}\right)^{2}}.$$

For DLSR, the distance between the class labels of the first and third samples can be denoted by
$$d_{\mathrm{DLSR}} = \sqrt{\left(1 + m_{11} + m_{31}\right)^{2} + \left(m_{12} - m_{32}\right)^{2} + \left(1 + m_{13} + m_{33}\right)^{2}}.$$

For CLSR, the distance between their class labels can be denoted by $d_{\mathrm{CLSR}} = \sqrt{2}$.

We see that if the dragging coefficient matrices have the same components, DLSR has the largest class margin whereas our method usually has the smallest class margin. In other words, we usually have $d_{\mathrm{ours}} \le d_{\mathrm{CLSR}} \le d_{\mathrm{DLSR}}$. Actually, because $m_{ij} \ge 0$, it is absolutely certain that $d_{\mathrm{DLSR}} \ge \sqrt{2} = d_{\mathrm{CLSR}}$. As for $d_{\mathrm{ours}} \le d_{\mathrm{CLSR}}$, it can be demonstrated as follows. First, $d_{\mathrm{ours}}^{2} = \left(1 - m_{11} - m_{31}\right)^{2} + \left(m_{12} - m_{32}\right)^{2} + \left(1 - m_{13} - m_{33}\right)^{2}$. Because $0 \le m_{ij} \ll 1$ is usually satisfied, we can ignore the second-order terms and have $d_{\mathrm{ours}}^{2} \approx 2 - 2\left(m_{11} + m_{31} + m_{13} + m_{33}\right) \le 2 = d_{\mathrm{CLSR}}^{2}$. As a result, in the scenario of noised and deformable data, our method can effectively decrease the probability that the classifier learned from training samples overfits them and cannot be well applied to test samples. In other words, our method makes the obtained classifier generalize well and is very suitable for the classification of noised and deformable data.
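A quick numerical illustration of this ordering, with arbitrarily chosen dragging coefficients $m_{ij} = 0.1$ (illustrative values, not values from the paper):

```python
import numpy as np

# Label vectors of the first and third samples in the three-class example.
m = 0.1
ours = np.array([1 - m, m, m]) - np.array([m, m, 1 - m])        # negative dragging
dlsr = np.array([1 + m, -m, -m]) - np.array([-m, -m, 1 + m])    # DLSR dragging
clsr = np.array([1, 0, 0]) - np.array([0, 0, 1])                # fixed binary labels

print(np.linalg.norm(ours), np.linalg.norm(clsr), np.linalg.norm(dlsr))
# prints roughly 1.13, 1.41, 1.70, i.e. d_ours <= d_CLSR <= d_DLSR
```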

Secondly, we present the effects of the kernel trick integrated into our method. In some real-world applications, samples from different classes are mixed up and are not linearly separable, because the difference between training samples from the same class may be much greater than the difference between training samples from different classes. For instance, in the face recognition problem, face images from the same person may differ more than face images from distinct persons owing to variable expressions, poses, and illuminations. This is known as the problem of uncertain data [42, 43]. In this situation, neither CLSR nor DLSR can attain a good classification performance. The kernel approach can change the distribution of samples by the nonlinear mapping. If an appropriate kernel function is utilized, the kernel approach can make linearly nonseparable samples become linearly separable. The term linearly separable means that samples of different classes have good separability; more precisely, it means that a linear boundary, such as a line or a hyperplane, can separate samples from different classes without errors.

Here, kernel mapping is integrated into our approach so that, in the high dimensional kernel feature space, it is easy to learn a mapping that can well convert training samples into their class labels. Namely, the linear transformation matrix obtained in the high dimensional feature space can more appropriately map training samples into their class labels. Therefore, our kernel based approach can perform classification well.

If samples of two classes are not linearly separable, CLSR and DLSR cannot attain a good classification performance. KNDLR first applies a nonlinear mapping to the data to enhance the linear separability of the samples; hence KNDLR is able to obtain higher classification accuracy than CLSR and DLSR. Moreover, our KNDLR only utilizes the kernel function to calculate the transformation and the class labels of test samples instead of directly calculating $\phi(X)$.

In addition, the overall complexity of KNDLR is low, although it is solved iteratively. In each iteration, the main computation cost is in (25), where we need to calculate the matrix inverse $(K + \lambda I)^{-1}$. Since $(K + \lambda I)^{-1}$ depends only on $X$ and the utilized kernel function, it can be precalculated before the loop is carried out. Thus the speed of calculating (25) is very fast. Moreover, $K + \lambda I$ is an $n \times n$ matrix ($n$ is the number of training samples), while the matrix $XX^{T} + \lambda I$ inverted in CLSR or DLSR is a $d \times d$ matrix ($d$ is the number of features). Thus, when the number of samples is much smaller than the dimension of the features, the size of $K + \lambda I$ is small, and it is easy to calculate the inverse $(K + \lambda I)^{-1}$. If the features are very high dimensional, calculating the inverse of $XX^{T} + \lambda I$ is quite time-consuming and memory-consuming. In particular, although our KNDLR approach is similar to CLSR and DLSR in some aspects, it is much more efficient than them when classifying high dimensional data. However, when the number of samples is not much smaller than the number of features and the dimension of the features is high, the size of $K + \lambda I$ is large; calculating $(K + \lambda I)^{-1}$ is then as complex as computing the inverse of $XX^{T} + \lambda I$, and the efficiency of our KNDLR is almost the same as that of CLSR and DLSR.
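The dimensional argument can be made concrete with a small sketch (illustrative shapes only; the linear kernel is used here so that the two matrices can be compared directly):

```python
import numpy as np

d, n = 2000, 200                  # illustrative: high-dimensional features, few samples
X = np.random.randn(d, n)
K = X.T @ X                       # Gram matrix used by KNDLR:       n x n
G = X @ X.T                       # matrix inverted by CLSR/DLSR:    d x d
# Inverting (K + lam*I) costs O(n^3), inverting (G + lam*I) costs O(d^3),
# so the kernel formulation is cheaper whenever n is much smaller than d.
```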

5. Experiments

In our experiments, KNDLR was compared with CLSR, DLSR, NDLR (KNDLR without the kernel trick), the kernel support vector machine (SVM) in [31], the nearest neighbor method (KNN), the nonnegative least squares method (NNLS) proposed in [3], sparse representation based classification (SRC (l1_ls)), and linear regression based classification (LRC). We use five face image databases and a handwritten digit dataset, namely, the Georgia Tech (GT), FERET, LFW, AR, YaleB, and MNIST datasets. The subsets of the last two datasets, which are available at "http://www.cad.zju.edu.cn/home/dengcai/Data/data.html," were used to perform our experiments. All methods were directly performed on the images, without extracting features from the images in advance. Our method, CLSR, DLSR, and NDLR all have a parameter $\lambda$. The parameter $\lambda$ was set to 0.0001, 0.0005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, and 0.5, respectively. The best accuracy of each method is given for comparison. The threshold $\varepsilon$ is set to 0.0001. For KNDLR, we used the polynomial kernel on LFW, AR, YaleB, and MNIST and the Gaussian radial basis function (RBF) kernel on GT and FERET, respectively. The parameters $c$ and $p$ of the polynomial kernel were set to 1 and 2, respectively. The parameter $\sigma$ of the RBF kernel was set to the median value of the distances between the training samples and their mean. For the kernel SVM, the package libsvm-mat-3.0-1 is used. The libsvm options of the function "svmtrain" were set to "-s 0 -t 2 -c 1.0", where "-t 2" indicates the Gaussian radial basis function (RBF) kernel. The value of the hyperparameter was selected from a candidate set by a cross-validation approach. For KNN, $k$ was set to 1, and the Euclidean distance metric was used to find the nearest neighbor.
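For reproducibility, the parameter settings described above can be scripted roughly as follows; the $\lambda$ grid is taken from the text, while the helper name and the exact form of the median-based $\sigma$ heuristic are our assumptions.

```python
import numpy as np

# Candidate regularization values swept for KNDLR, CLSR, DLSR, and NDLR.
LAMBDA_GRID = [0.0001, 0.0005, 0.01, 0.02, 0.03, 0.04, 0.05,
               0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5]

def rbf_sigma_from_median(X):
    """Median-based RBF parameter: median distance of the training samples to their mean."""
    mean = X.mean(axis=1, keepdims=True)          # mean of all training samples (column)
    dists = np.linalg.norm(X - mean, axis=0)      # distance of each sample to the mean
    return np.median(dists)
```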

5.1. Experiment on the GT Database

The Georgia Tech (GT) face database contains 750 images from 50 subjects. For each subject 15 face images are available. The pictures show frontal and/or tilted faces with different facial expressions, lighting conditions, and scales. Figure 1 presents some face images from the GT face database. In our experiments, all images in the database were manually cropped and resized to 30 × 40. After the image cropping, most of the complex background has been excluded. They are further converted to gray level images for both training and testing purposes.

In our experiments, we randomly took a given number of face images of each subject as training samples and treated the remaining face images as testing samples. For each such number of training samples, we take the average value of the classification rates calculated from 10 random splits as the final classification rate. The experimental results are presented in Table 1. From this table, we can conclude that the proposed method obtains the best classification accuracy.

5.2. Experiment on the FERET Face Dataset

A subset of the FERET face dataset was used in the experiment. This subset includes 1442 face images from 206 subjects and each subject has seven different face images. This subset was composed of the images in the original FERET face dataset whose names are marked with two-character strings: “ba,” “bj,” “bk,” “be,” “bf,” “bd,” and “bg”. Figure 2 shows some image examples. We resized all face images to 40 by 40 matrices.

In our experiments, a given number of samples of each subject were randomly taken as training samples and the remaining samples were treated as test samples. For each such number, we take the average value of the classification rates calculated from 10 random splits as the final classification rate. Experimental results of classification accuracies are shown in Table 2. Table 2 demonstrates that our method performs better than the other methods.

5.3. Experiment on the LFW Face Dataset

The LFW dataset is a face image dataset for unconstrained face recognition. Images in this dataset vary much more in clothing, pose, and background than those in the other face datasets. There are more than 13000 face images collected from the web, and every face image is manually labeled. We use only a subset composed of 1251 images from 86 subjects to conduct experiments. Figure 3 shows some example face images. Each image is cropped and resized to 32 × 32 pixels.

A random subset with a given number of images per individual was taken with labels to form the training set, and the rest of the database was considered to be the testing set. For each such number, there are 10 random splits. The average value of the classification rates calculated from the 10 random splits was taken as the final classification rate. The classification accuracies are shown in Table 3. It is clear that our method performs better than the rest of the methods.

5.4. Experiment on the AR Face Dataset

The AR dataset contains over 4000 face images of 126 subjects, including frontal views of faces with different facial expressions, lighting conditions, and occlusions. We use only a subset composed of 3120 images from 120 subjects, and each subject has 26 different face images. Figure 4 shows some example face images. Each image is cropped and resized to 40 × 50 pixels.

A random subset with a given number of images per individual was taken with labels to form the training set, and the rest of the database was considered to be the testing set. For each such number, there are 10 random splits. The average classification rate calculated from the 10 random splits was taken as the final classification rate. The classification accuracies are shown in Table 4. It is clear that our method performs better than the rest of the methods.

5.5. Experiment on the YaleB Face Dataset

For this database, we simply use the cropped images, resized to a fixed smaller size, to conduct the experiments. Figure 5 shows some example face images.

A random subset with a given number of images per individual was taken with labels to form the training set, and the rest of the database was considered to be the testing set. For each such number, there are 10 random splits. The average classification rate calculated from the 10 random splits was taken as the final classification rate. The classification accuracies are shown in Table 5. It is clear that our method performs better than the rest of the methods, except for SRC. However, SRC is time-consuming, as shown in Section 5.7.

5.6. Experiment on the MNIST Dataset

The MNIST database of handwritten digits from Yann LeCun's page has a training set of 60,000 examples and a test set of 10,000 examples. We use only a subset composed of the first 2k training images and the first 2k test images to conduct experiments. The size of each image is 28 × 28 pixels, with 256 gray levels per pixel. Thus, each image is represented by a 784-dimensional vector. Figure 6 shows some example images. Experimental results of classification accuracies are shown in Table 6. From this table, we can conclude that the proposed method obtains the best classification accuracy.

5.7. Computing Time

The aforementioned experiments were performed on an Intel machine (Core (TM) i5-6600 CPU, 3.30 GHz, 8 GB RAM, with a 64-bit Chinese Windows 10 operating system). All methods, except for the SVM method, were implemented in MATLAB 2010a. The libSVM 3.0 toolbox, implemented in C, was utilized for performing SVM. Besides classification accuracies, because the computing time differs significantly among the methods, we select the experiments on GT and AR to show the computing time of each method. The GT database only contains a small number of samples, while the AR database contains a relatively large number of samples, which represent two different cases. Here, the computing time of each method is the sum of the time spent on learning from samples and the time spent on classifying new samples when training samples and test samples have been given. We use the MATLAB instructions tic and toc to measure the time. Table 7 shows the computing time of the methods on GT and Table 8 shows that on AR.

First, it can be clearly seen that our KNDLR approach is as fast as DLSR, CLSR, NDLR, SVM, and KNN both on GT, which has a small number of samples, and on AR, which has a relatively large number of samples. Second, KNDLR is much faster than NNLS and SRC, especially on AR. Third, the computing time of KNDLR on AR is only a little longer than that on GT, while the computing time of some methods, such as NNLS and SRC, is far longer on AR than on GT. In particular, SRC becomes very time-consuming when the number of samples is large. One of the reasons for the efficiency of our method is that the learning procedure is executed only once and the results are then saved for classifying all new samples. SRC needs to learn a linear combination of all training samples for every new sample; thus, when the number of samples is large, SRC becomes extremely time-consuming. This demonstrates that our KNDLR is efficient.

5.8. Parameter λ and Convergence

In order to further illustrate the properties of KNDLR, the classification accuracies corresponding to different values of $\lambda$ and the convergence behavior are shown in Figures 7 and 8, respectively, where each curve corresponds to using the first several samples of each subject for training and the remaining samples for testing. KNDLR, DLSR, and CLSR are similar to each other to some degree: all of them apply least squares regression and have a regularization parameter $\lambda$. Figure 7 shows that KNDLR is relatively more robust to $\lambda$ than DLSR and CLSR. In particular, for GT, FERET, AR, and MNIST, the classification accuracies obtained by KNDLR vary within a small range. It is also observed that a relatively large value of $\lambda$ does not bring better classification accuracy, so $\lambda$ can be limited to a small range. In real applications, the cross-validation method is utilized to determine the optimal value of $\lambda$ from this range. More importantly, Figure 8 shows that KNDLR converges very fast on the six datasets, especially on the FERET database.

6. Conclusions

This paper proposed a kernel negative dragging linear regression method for pattern classification, which simultaneously integrates the negative dragging technique and the kernel method into linear regression for robust pattern classification under the condition that the consistency between the test samples and training samples is poor. The negative dragging technique learns a classifier with a proper margin from noised and deformable data. Meanwhile, the kernel approach can make linearly nonseparable samples become linearly separable. Based on the combined effect of the negative dragging technique and the kernel method, our method can better perform classification on noised and deformable data. Comprehensive experiments on six different datasets demonstrate that the proposed KNDLR outperforms existing LR methods for classification as well as some other commonly used methods such as SVM, NNLS, SRC, and LRC, and that our KNDLR is efficient.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61672333, 61402274, and 41471280), the Program of Key Science and Technology Innovation Team in Shaanxi Province (no. 2014KTC-18), the Key Science and Technology Program of Shaanxi Province, China (no. 2016GY-081), the Fundamental Research Funds for the Central Universities (no. 2017CSY024), the Industry University Cooperative Education Project of Higher Education Department of the Ministry of Education (no. 201701023062), and the Interdisciplinary Incubation Project of Learning Science of Shaanxi Normal University.