Research Article  Open Access
Tai-Xiang Jiang, Ting-Zhu Huang, Xi-Le Zhao, Tian-Hui Ma, "Patch-Based Principal Component Analysis for Face Recognition", Computational Intelligence and Neuroscience, vol. 2017, Article ID 5317850, 9 pages, 2017. https://doi.org/10.1155/2017/5317850
Patch-Based Principal Component Analysis for Face Recognition
Abstract
We propose a patch-based principal component analysis (PCA) method to deal with face recognition. Many PCA-based methods for face recognition utilize the correlation between pixels, columns, or rows, but the local spatial information is not utilized or not fully utilized in these methods. We believe that patches are more meaningful basic units for face recognition than pixels, columns, or rows, since faces are discerned by patches containing the eyes and the nose. To calculate the correlation between patches, face images are divided into patches, and these patches are converted to column vectors, which are then combined into a new "image matrix." By replacing the images with the new "image matrix" in the two-dimensional PCA framework, we directly calculate the correlation of the divided patches by computing the total scatter. By optimizing the total scatter of the projected samples, we obtain the projection matrix for feature extraction. Finally, we use the nearest neighbor classifier. Extensive experiments on the ORL and FERET face databases are reported to illustrate the performance of the patch-based PCA. Our method improves the accuracy compared to one-dimensional PCA, two-dimensional PCA, and two-directional two-dimensional PCA.
1. Introduction
Principal component analysis (PCA), one of the most popular multivariate statistical techniques [1], has been widely used in the areas of pattern recognition and signal processing [2]. It is a statistical method under the broad title of factor analysis [3]. The modern instantiation of PCA was formalized by Hotelling [1, 4], who also coined the term principal component, but its origin can be traced back to Pearson [5] or even Cauchy [6]. PCA analyzes observed data that are usually described by several dependent and intercorrelated variables. Its goal is to extract the important information from the data and to express this information as a set of new orthogonal variables called principal components.
There are numerous PCA-based methods for face recognition, from one-dimensional PCA [7] to two-directional two-dimensional PCA, known as (2D)^{2}PCA [8]. All these methods rely on two points. First, the patterns of similarity of the observations and of the variables can be represented by PCA as points in maps [2, 9, 10]. Second, the similarity of face images can in some sense be "calculated" by evaluating the distance between these points.
The main idea of the one-dimensional PCA method for face recognition is eigenspace projection. A projection matrix is obtained by maximizing the image covariance, which reflects the correlation between pixels in the training data (i.e., the labeled face images). The next step is projecting the 1D vectors (previously constructed from the 2D images) into the feature space [11]. The eigenvectors corresponding to large eigenvalues (i.e., the principal components), which resemble a human face after being reshaped into a matrix of the same size as the original face image, are called eigenfaces. Then the nearest neighbor (NN) classifier is adopted, computing distances in the eigenspace to verify the identity of unlabeled face images. For instance, if an unlabeled face image is nearest to one of the 1st individual's labeled face images in the eigenspace, we conclude that the face belongs to the 1st individual. However, transforming 2D images into 1D vectors always leads to a very high-dimensional space, in which calculating the covariance matrix, which reflects the correlation of pixels, is difficult. The size of the covariance matrix reaches mn × mn if the size of the face images is m × n. Hence, it would consume a lot of time to evaluate the eigenvectors of such a large covariance matrix.
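The eigenface pipeline described above can be sketched as follows. This is a minimal illustration under our own naming (not the authors' code); it uses the standard Gram-matrix trick to sidestep the huge mn × mn pixel-space covariance that the paragraph warns about:

```python
import numpy as np

def eigenfaces(images, num_components):
    """Classical 1D PCA: flatten images and extract the leading eigenvectors
    of the pixel covariance (via the small M x M Gram matrix)."""
    M = images.shape[0]
    X = images.reshape(M, -1).astype(float)   # each row: one flattened face
    mean_face = X.mean(axis=0)
    Xc = X - mean_face
    # Eigendecompose the M x M Gram matrix instead of the (h*w) x (h*w)
    # covariance matrix; both share the same nonzero spectrum.
    gram = Xc @ Xc.T / M
    vals, vecs = np.linalg.eigh(gram)         # ascending eigenvalues
    order = np.argsort(vals)[::-1][:num_components]
    basis = Xc.T @ vecs[:, order]             # lift back to pixel space
    basis /= np.linalg.norm(basis, axis=0)    # unit-norm eigenfaces
    return mean_face, basis

rng = np.random.default_rng(0)
faces = rng.random((10, 16, 16))              # stand-in "training faces"
mean_face, basis = eigenfaces(faces, 5)
```

A test image is then projected onto `basis` and compared to the projected training images by Euclidean distance, as the NN step in the text describes.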
Two-dimensional principal component analysis (2DPCA) [12], as opposed to eigenface, projects face images into a feature subspace directly, without image-to-vector conversion. This direct projection not only preserves part of the spatial information of the image but also reduces the computational burden [13]. The so-called image covariance matrix of 2DPCA, which is constructed directly from the original face image matrices, is much smaller than the covariance matrix of the eigenface method. In 2DPCA, the image covariance (scatter) matrix, which plays the same role as the covariance matrix in the eigenface method, reflects the correlation between the columns of each image. Motivated by 2DPCA, (2D)^{2}PCA [8] calculates the correlation in two directions, over both the columns and the rows. 2DPCA and (2D)^{2}PCA have achieved good results in face recognition. However, these methods fail to fully explore the local spatial information.
In order to further explore the local spatial information, let us review the track of existing methods. The eigenface method only calculates the correlation of pixels, the 2DPCA method only calculates the correlation of columns, and (2D)^{2}PCA calculates the correlation of both columns and rows at the same time. Accuracy improves from one-dimensional PCA to (2D)^{2}PCA as the basic unit changes from pixels to both columns and rows. Then what is the best basic unit if this evolution continues? We believe that the patch is the most meaningful basic unit for these linear classification methods (e.g., people are discerned by their eyes and nose). The local spatial information of the eyes and nose is contained in the patches, so it is more intuitive to consider the correlation between different patches. From another aspect, patches have recently been used successfully in the field of image processing, not only in face recognition [14–16] but also in image denoising [17–19], image super-resolution [20, 21], and image decomposition (cartoon-texture [22, 23] or illumination-reflectance [24] and further retinex image enhancement [25]). The patch is becoming a basic tool in the abovementioned literature. Motivated by our idea that the patch is the most meaningful basic unit for these linear classification methods and by the widely successful application of patches, we intend to calculate the correlation of the patches in the computation of our PCA.
For the purpose of calculating the correlation of the patches, we simply add a patch preprocessing step before the framework of 2DPCA. That is, we first divide the face images into patches and then convert these patches into columns. In the 2DPCA framework, the image columns are substituted by our patch-unfolded columns, so the correlation between columns in 2DPCA becomes the correlation between patches after calculating the image covariance (scatter) matrix. Then the orthonormal eigenvectors of the image covariance (scatter) matrix serve as the optimal projection axes, which are used for feature extraction. The optimal projection axes form a matrix, which is called the feature matrix or feature image of the training images [12]. The test images are projected onto this projection matrix and then classified by finding the nearest neighbor of the projections of the test images. We call this method patch-based principal component analysis (PPCA). The main contribution of the proposed method is that the most meaningful basic unit, the patch, is incorporated into the framework of 2DPCA, so that the correlation between the most meaningful basic units is utilized to improve the accuracy rate. This is confirmed by our experiments. Besides, the proposed method can be easily implemented.
In fact, we could choose the support vector machine (SVM) as the classifier, and this might improve the accuracy rate. But SVM is not necessary for the comparison among our method, the eigenface method, 2DPCA, and (2D)^{2}PCA. In another aspect, we know that PCA is a global technique [26], so it is difficult to utilize both the local spatial correlation between pixels in each patch and the nonlocal spatial correlation between patches as in [17]. But we consider that the global computation could somehow compensate for the utilization of the nonlocal spatial correlation between patches.
It is noteworthy that there has been great progress in face recognition nowadays, and it is very hard for an improved version of an old method to challenge the recent deep-learning-based methods [27, 28]. Please refer to [29] for a more extensive overview of face recognition. However, the improvement of an old method is still meaningful, since many old methods are still widely employed, e.g., the alternating direction method of multipliers (ADMM) [30–35] and the block coordinate descent (BCD) algorithm [36]. Meanwhile, what we focus on is the improvement of the PCA-based classification method. Moreover, the experimental results in Section 3 indeed validate that our method outperforms other PCA-based methods.
The outline of this paper is given as follows. In Section 2, we present our PPCA method for face recognition. In Section 3, experimental results are reported to demonstrate the performance of the proposed method. Finally, some conclusions are drawn in Section 4.
2. Patch-Based Principal Component Analysis
In 2DPCA, an image matrix A of size m × n is directly projected onto an n-dimensional unitary column vector X: Y = AX. By maximizing the total scatter of the projected samples, we obtain the projection matrix. The following steps are then feature extraction and classification. Our PPCA just adds a patch preprocessing step prior to this framework. Then, as in 2DPCA, we calculate the image covariance matrix and the optimal projection axes for feature extraction and classification.
2.1. Patch Preprocessor
Suppose that we have M training facial images. For the jth training sample, we divide the image of size m × n into patches of size p × q (p ≤ m, q ≤ n). If m (or n) is not divisible by p (or q), we add an overlap o_1 (or o_2), so that the number of patches along each direction is always an integer no matter the choice of p (or q). Generally, for the sake of reducing the computational burden, we choose the smallest such overlap for each selected p (or q). Then the number of patches of every face image is N = (m/p)(n/q), or, with the overlaps, N = ((m − o_1)/(p − o_1)) × ((n − o_2)/(q − o_2)).
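The smallest-overlap rule can be sketched as follows (an illustrative reading of the rule with a hypothetical function name; with stride s = p − o, the patch count along one dimension is (length − o)/(p − o)):

```python
def patch_grid(length, p):
    """Smallest overlap o (0 <= o < p) such that p-sized patches with
    stride p - o tile a dimension of the given length exactly.
    Returns (overlap, number_of_patches)."""
    for o in range(p):
        if (length - p) % (p - o) == 0:
            return o, (length - p) // (p - o) + 1
    raise ValueError("no valid overlap")

# A 112-pixel dimension with 16-pixel patches needs no overlap;
# a 10-pixel dimension with 4-pixel patches needs an overlap of 1.
rows = patch_grid(112, 16)
cols = patch_grid(10, 4)
```

For the 10-pixel example the patches sit at offsets 0, 3, and 6, covering the dimension exactly with a 1-pixel overlap between neighbors.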
Then we convert each patch into a column vector of size pq × 1 (more details about the patch-to-vector conversion are given in Section 3). Let B_j represent the matrix collecting all of the reshaped vectors of the jth training facial image, B_j = [b_1, b_2, …, b_N], where the size of B_j is pq × N and j = 1, 2, …, M.
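The patch preprocessing that builds the new "image matrix" might look like this minimal sketch (the function name and the column-wise unfolding order are our assumptions, not the paper's):

```python
import numpy as np

def patch_preprocess(image, p, q, sp, sq):
    """Divide an image into p x q patches taken with strides sp, sq
    (stride = patch size minus overlap) and stack each patch as a
    column of the 'image matrix' B of size (p*q) x N."""
    m, n = image.shape
    cols = []
    for i in range(0, m - p + 1, sp):
        for j in range(0, n - q + 1, sq):
            patch = image[i:i + p, j:j + q]
            cols.append(patch.reshape(-1, order="F"))  # concatenate columns
    return np.stack(cols, axis=1)

img = np.arange(48.0).reshape(6, 8)
B = patch_preprocess(img, 3, 4, 3, 4)   # four non-overlapping 3x4 patches
```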
It should be noted that we adopt the 2DPCA framework rather than (2D)^{2}PCA. As mentioned before, (2D)^{2}PCA takes the correlations of both columns and rows into consideration, while the 2DPCA method concentrates on the correlations between column vectors. Meanwhile, our patch preprocessing converts patches into vectors. Therefore, it is reasonable to adopt the 2DPCA framework rather than (2D)^{2}PCA.
2.2. Total Scatter
Let X be an N-dimensional unitary column vector, that is, X^{T}X = 1. We project a preprocessed matrix B of size pq × N onto X by the following linear transformation [37, 38]: Y = BX. Y is a pq-dimensional projected vector (i.e., the projected feature vector [12]) of the matrix B. As in 2DPCA, we use the total scatter of the projected samples to measure the discriminatory power of the projection vector X: J(X) = tr(S_x), where S_x denotes the covariance matrix of the projected feature vectors and tr(S_x) its trace. Let us define G_t = (1/M) Σ_{j=1}^{M} (B_j − B̄)^{T}(B_j − B̄), which is called the image covariance (scatter) matrix, where the average matrix of all the preprocessed images is B̄ = (1/M) Σ_{j=1}^{M} B_j. Then the total scatter can be evaluated by J(X) = X^{T} G_t X. It is easy to verify that G_t is a semipositive definite matrix of size N × N, and we can evaluate G_t directly from the training samples. The criterion J(X) = X^{T} G_t X, where X is a unitary column vector, is called the generalized total scatter criterion [12]. The unitary vector X that maximizes the criterion is called the optimal projection axis.
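The image covariance (scatter) matrix and the total scatter criterion can be sketched in NumPy as follows (our own illustrative code with stand-in random data, not the authors' implementation):

```python
import numpy as np

def image_covariance(Bs):
    """Image covariance of the patch-preprocessed training matrices:
    G_t = (1/M) * sum_j (B_j - B_bar)^T (B_j - B_bar)."""
    B_bar = Bs.mean(axis=0)
    G = sum((B - B_bar).T @ (B - B_bar) for B in Bs)
    return G / len(Bs)

def total_scatter(G, X):
    """Generalized total scatter criterion J(X) = tr(X^T G_t X)."""
    return np.trace(X.T @ G @ X)

rng = np.random.default_rng(0)
Bs = rng.random((5, 12, 4))   # M=5 preprocessed images, pq=12, N=4
G = image_covariance(Bs)
```

Note that G is N × N regardless of the patch size pq, which keeps the eigenproblem small.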
2.3. Optimization
It has been proved that the optimal projection axis, which maximizes the total scatter of the projected samples, is the eigenvector of G_t corresponding to the largest eigenvalue [38]. In general, we choose the orthonormal eigenvectors X_1, …, X_d of G_t corresponding to the first d largest eigenvalues; that is, {X_1, …, X_d} = arg max J(X) subject to X_i^{T}X_j = 0 for i ≠ j. The first eigenvector is required to have the largest possible variance (i.e., this component will "explain" or "extract" the largest part of the pattern information of the preprocessed face images [1]). We can simply control the value of d by a threshold θ as follows [8]: (Σ_{i=1}^{d} λ_i)/(Σ_{i=1}^{N} λ_i) ≥ θ, where λ_1 ≥ λ_2 ≥ … ≥ λ_N are the eigenvalues of G_t sorted in decreasing order. We can determine θ by presetting it or by referring to results from different face databases.
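Selecting d by the energy threshold can be sketched as follows (illustrative code; the toy diagonal covariance and function name are our own):

```python
import numpy as np

def choose_projection(G, theta=0.90):
    """Keep the smallest number d of leading eigenvectors of G_t whose
    eigenvalues capture at least a fraction theta of the total energy."""
    vals, vecs = np.linalg.eigh(G)           # ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]   # sort descending
    ratio = np.cumsum(vals) / vals.sum()
    d = int(np.searchsorted(ratio, theta) + 1)
    return vecs[:, :d]                       # projection matrix X, size N x d

G = np.diag([5.0, 3.0, 1.0, 1.0])            # toy covariance, total energy 10
X = choose_projection(G, 0.80)               # 5 + 3 = 80% of energy -> d = 2
```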
2.4. Feature Extraction and Classification
For each patch-preprocessed facial image B_j in the training set, let C_j = B_j X, where X = [X_1, …, X_d] of size N × d is the projection matrix. We call C_j of size pq × d the patch-based feature matrix and Y_k = B_j X_k (k = 1, …, d) the patch-based principal components of the jth sample image.
After patch preprocessing and 2DPCA projection, the facial images in the training set have been transformed into patch-based feature matrices. We use the nearest neighbor (NN) classifier [39] for classification. We define the distance between two arbitrary patch-based feature matrices C_i = [Y_1^{(i)}, …, Y_d^{(i)}] and C_j = [Y_1^{(j)}, …, Y_d^{(j)}] by d(C_i, C_j) = Σ_{k=1}^{d} ‖Y_k^{(i)} − Y_k^{(j)}‖_2, where ‖·‖_2 denotes the Euclidean distance.
We have M training facial images, each of which is assigned a given identity. Given a test facial image, we first perform patch preprocessing and obtain a preprocessed matrix B. Then we project B onto X and obtain C = BX. If d(C, C_l) = min_j d(C, C_j) ≤ ε, where ε is a preset threshold, the test image is assigned to the same class as C_l; that is, the test facial image and the lth training image belong to the same person. Otherwise, if min_j d(C, C_j) > ε, the test sample does not belong to any identity in the training data.
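The feature extraction and nearest-neighbor steps can be sketched together as follows (a minimal illustration with hypothetical names and random stand-in data; the rejection threshold is omitted for brevity):

```python
import numpy as np

def feature_matrix(B, X):
    """Patch-based feature matrix C = B X (size pq x d)."""
    return B @ X

def feature_distance(C1, C2):
    """Sum of Euclidean distances between corresponding feature columns."""
    return np.linalg.norm(C1 - C2, axis=0).sum()

def nearest_neighbor(C_test, train_features, labels):
    """Assign the identity of the closest training feature matrix."""
    dists = [feature_distance(C_test, C) for C in train_features]
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(2)
X = np.linalg.qr(rng.standard_normal((6, 3)))[0]        # orthonormal N x d
train = [rng.standard_normal((12, 6)) for _ in range(4)]
feats = [feature_matrix(B, X) for B in train]
label = nearest_neighbor(feature_matrix(train[1], X), feats,
                         ["a", "b", "c", "d"])
```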
3. Experimental Results
In this section, the performance of our proposed PPCA is compared with that of the eigenface method (i.e., the 1DPCA method), the 2DPCA method, and the (2D)^{2}PCA method on two well-known face image databases (ORL and FERET). From our point of view, experiments on constrained face databases are sufficient to validate the superiority of the proposed method among these methods. Thus, unconstrained face databases, for example, the LFW database, are not taken into consideration.
First, the recognition accuracies of these four methods are compared under the experimental strategy of using half of the images in the database for training. After that, more experimental results show the influence of reordering and of the size of the patches. All experiments are performed using Matlab (R2014a) on a desktop with a 3.40 GHz Intel Core i7-2600 CPU and 12 GB RAM running Windows 7. If not specified, the preset threshold θ, which controls the number of projection vectors, is set to 0.90 in the following experiments; that is, we extract 90 percent of the energy of the whole set of training images.
3.1. Recognition Accuracy Results on the FERET Database
The FERET database [40, 41] is a standard dataset used for evaluating facial recognition systems. The Face Recognition Technology (FERET) program is managed by the Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology (NIST). As of 2003, there were 2,413 facial images representing 856 individuals in the FERET database. The performance of the above four methods is tested on a partial FERET face database, which contains 400 cropped images from 200 individuals, each providing 2 different images. The so-called fa subset, which contains 100 images, is used as training data, while the so-called fb subset, containing the remaining 100 images, is used as testing data. Figure 1 shows 2 images of one individual in the FERET database.
From Table 1, we can see that the PPCA method achieves the highest accuracy on the FERET database. To get the highest accuracy, the parameters are set by referring to the results. The recognition accuracy is improved from 84.0% for 2DPCA and 83.0% for (2D)^{2}PCA to 86.0%. This means that the PPCA method recognized 2 more images than 2DPCA and 3 more images than (2D)^{2}PCA on the FERET database. We remark here that cropped images were used in [8], and accuracy rates of 83%, 84.5%, and 85% were obtained by 1DPCA, 2DPCA, and (2D)^{2}PCA, respectively.

3.2. Recognition Accuracy Results on the ORL Database
The ORL database contains images of 40 individuals, each providing 10 different images (http://rduin.nl/prhtml/prdatafiles/orl.html). Figure 2 gives the 10 images of 1 individual in the ORL database. As previously mentioned, the first five images of each individual are used as training data, and the remaining five images are used as testing data.
Table 2 compares the four methods in terms of recognition accuracy. Both 2DPCA and (2D)^{2}PCA reach 90.5% accuracy, which is higher than the eigenface method. Our method achieves the highest accuracy on this database: the recognition accuracy is improved from 90.5% to 91.0% with four different patch sizes. That is, the PPCA method recognizes 1 more face image than 2DPCA and (2D)^{2}PCA on the ORL database. The CPU time of the PPCA method is higher, but this drawback is less serious in its consequences.

3.3. Influence of Reordering the Patch
The patch-to-vector conversion has a significant impact on the performance of our method. Our initial patch preprocessor converts a patch into a column vector by directly concatenating the small columns in the patch. This indeed increases the recognition accuracy: our method achieves 91.0% recognition accuracy with four different patch sizes. However, this improvement does not satisfy us. Employing the idea of clustering, we instead convert a patch into a column vector by reordering the pixels by value, for the sake of placing similar values together. The concatenating strategy is compared with the reordering strategy by contrasting the recognition accuracy and CPU time on five different patch sizes in Table 3.
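The two conversion strategies can be sketched as follows (our own illustrative code: strategy 1 concatenates the patch columns, strategy 2 sorts the pixel values so that similar values are placed together):

```python
import numpy as np

def patch_to_vector(patch, reorder=False):
    """Convert a patch to a vector: either concatenate its columns
    (strategy 1) or additionally sort the pixel values (strategy 2,
    the reordering strategy)."""
    v = patch.reshape(-1, order="F")     # column-wise concatenation
    return np.sort(v) if reorder else v

patch = np.array([[3.0, 1.0],
                  [0.0, 2.0]])
```

Note that sorting discards the within-patch pixel positions, which is one way to understand the instability discussed below.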

Table 3 shows that the reordering strategy achieves better recognition accuracy than the concatenating strategy. Although the reordering strategy implies an additional step of ranking the values in order, its CPU time is not always higher than that of the concatenating strategy. We further analyze the eigenvalues of the image covariance (scatter) matrix G_t, which is defined in (9). A fixed patch size is selected, and the comparison is conducted on the ORL database. Here, the image covariance matrix is of small size (N × N), so it is very easy to calculate its eigenvalues. In Figure 3, the magnitudes (eigenvalues divided by the sum of the eigenvalues) of the eigenvalues for the two strategies are plotted in decreasing order.
As depicted in Figure 3, the magnitude of the eigenvalues decreases faster with the reordering strategy (strategy 2) than with the concatenating strategy (strategy 1). That is, the first few eigenvalues obtained with the reordering strategy are larger than the corresponding eigenvalues obtained with the concatenating strategy. This implies that the energy of a patch-preprocessed facial image is concentrated on its first few component vectors. Therefore, it is reasonable to use these component vectors for recognition purposes [12]. In addition, the more the energy is concentrated on the first few eigenvalues, the smaller the value of d in (11) will be. A smaller value of d brings less computational complexity and less CPU time, which is exactly consistent with the CPU time results in Table 3.
We remark here that although the reordering strategy achieves higher accuracy on the ORL database, we have to admit that it may not be stable: the reordering strategy does not achieve higher accuracy than the concatenating strategy on the FERET database. This potential instability may be attributed to the patch-to-vector procedure, which may more or less lose structural information. Hence, in further work, we will attempt to find better ways to preserve more of the local spatial structural information rather than better strategies for patch-to-vector conversion.
3.4. Influence of the Patch Size
The PPCA method can be considered a generalization of the 2DPCA method: 2DPCA is the particular case of PPCA in which each patch is a full image column of size m × 1. When the patch is a single pixel (1 × 1) or the whole image (m × n), the PPCA method resembles one-dimensional PCA. Table 4 illustrates that the choice of patch size affects the performance of our method both in recognition accuracy and in CPU time. A bad choice of patch size might give a negative result, so we had better not choose patches with too large or too small sizes. Therefore, it is important, but not easy, to choose a patch size with high recognition accuracy. Besides, the computational complexity is very large when the patches are highly overlapped (e.g., a patch size with overlaps of 0 and 18); that is, our method would take much more time than 1DPCA, 2DPCA, and (2D)^{2}PCA if the patches are highly overlapped. Hence, it is better to choose patches of moderate sizes with small overlaps.

Further analysis of the results shows that there are differences among the sets of images identified with different patch sizes. Our early experiments show that 2DPCA and (2D)^{2}PCA identify exactly the same images, so we can conclude that their identification capabilities are the same. Thus, the results of our method are compared with the results of 2DPCA. The results on the ORL database for the two strategies, with respect to 2DPCA, are shown in Tables 5 and 6, respectively. The item named "number of more identified images" refers to the number of facial images in the testing set that our method correctly recognizes but 2DPCA fails to identify. The item named "number of images failed to be identified" refers to the number of facial images in the testing set that our method does not identify but 2DPCA recognizes.


From Table 4, we can find that with one patch size the 195th face image in the test set is not recognized by our method, while the 42nd and 133rd are recognized, in contrast to the results given by 2DPCA and (2D)^{2}PCA. The 200th face image in the test set is recognized with another patch size. The 69th and 133rd face images are recognized, and the 198th fails to be identified, with a third patch size. The 42nd and 200th face images are recognized, and the 152nd is not identified, with a fourth patch size. Table 5 gives the comparison of our method with different patch sizes and the reordering strategy. For the sake of simplicity, the details are no longer listed.
We can see from Table 4 that different patch sizes lead to different identification results even though they achieve the same accuracy. As observed from Table 5, the PPCA method with patches of similar sizes performs approximately the same. For instance, our method with two similar patch sizes recognizes the 69th image in the ORL database, while three other patch sizes do not contribute to the recognition of the 69th image.
These differences in the results between our PPCA and 2DPCA (and (2D)^{2}PCA), the differences among the results of our method with different patch sizes, and the similarities in the results with similar patch sizes reveal that the capability of extracting different features differs according to the choice of patch size. This indeed validates our belief that "the patch is the meaningful basic unit for classification (e.g., people are discerned by their eyes and nose)," while being aware that the eyes, nose, and so forth are of different sizes.
4. Conclusions
We have presented a patch-based PCA method to deal with face recognition. By simply performing a patch preprocessing step before the computation of the projection matrix of 2DPCA, we can directly calculate the correlation of the patches instead of the rows or columns of face images. Comparisons of recognition accuracy are made with the 1DPCA [7], 2DPCA [12], and (2D)^{2}PCA [8] methods on the ORL face database and the FERET database. Numerical experiments are presented to illustrate that the use of patches improves the accuracy compared to the former 1DPCA, 2DPCA, and (2D)^{2}PCA. Meanwhile, the results support our belief that the patch is the most meaningful basic unit for classification.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research is supported by the 973 Program (2013CB329404), the National Science Foundation of China (61370147, 61402082), and the Fundamental Research Funds for the Central Universities (ZYGX2016J132). Portions of the research in this paper use the FERET database of facial images collected under the FERET program, sponsored by the DOD Counterdrug Technology Development Program Office. The authors would like to express their great gratitude to AT&T Laboratories Cambridge for the use of the face images in the ORL database.
References
1. H. Abdi and L. J. Williams, "Principal component analysis," Wiley Interdisciplinary Reviews: Computational Statistics, vol. 2, no. 4, pp. 433–459, 2010.
2. I. T. Jolliffe, Principal Component Analysis, Wiley Online Library, New York, NY, USA, 2002.
3. K. Kim, "Face recognition using principle component analysis," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 586–591, 1996.
4. H. Hotelling, "Analysis of a complex of statistical variables into principal components," Journal of Educational Psychology, vol. 24, no. 6, pp. 417–441, 1933.
5. K. Pearson, "On lines and planes of closest fit to systems of points in space," Philosophical Magazine, vol. 2, no. 6, pp. 559–572, 1901.
6. I. Grattan-Guinness, The Search for Mathematical Roots, 1870–1940: Logics, Set Theories and the Foundations of Mathematics from Cantor through Russell to Gödel, Princeton University Press, Princeton, NJ, USA, 2011.
7. M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 586–591, IEEE, Maui, HI, USA, 1991.
8. D. Zhang and Z. Zhou, "(2D)^{2}PCA: two-directional two-dimensional PCA for efficient face representation and recognition," Neurocomputing, vol. 69, no. 1–3, pp. 224–231, 2005.
9. J. Edward Jackson, A User's Guide to Principal Components, John Wiley & Sons, Hoboken, NJ, USA, 2005.
10. G. Saporta and N. Niang, "Principal component analysis: application to statistical process control," Data Analysis, pp. 1–23, 1996.
11. M. A. Turk and A. P. Pentland, "Recognition in face space," in Proceedings of Intelligent Robots and Computer Vision IX: Algorithms and Techniques, pp. 43–54, SPIE, Boston, MA, USA, 1990.
12. J. Yang, D. Zhang, A. F. Frangi et al., "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131–137, 2004.
13. Y. Sun, X. Tao, Y. Li, and J. Lu, "Robust 2D principal component analysis: a structured sparsity regularized approach," IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2515–2526, 2015.
14. C. Hu, J. Harguess, and J. K. Aggarwal, "Patch-based face recognition from video," in Proceedings of the International Conference on Image Processing (ICIP '09), pp. 3321–3324, IEEE, Cairo, Egypt, November 2009.
15. C. Jung, L. Jiao, B. Liu, and M. Gong, "Position-patch based face hallucination using convex optimization," IEEE Signal Processing Letters, vol. 18, no. 6, pp. 367–370, 2011.
16. Y. Wong, S. Chen, S. Mau, C. Sanderson, and B. C. Lovell, "Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2011), IEEE, Colorado Springs, CO, USA, June 2011.
17. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3D transform-domain collaborative filtering," IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007.
18. Y. Chen, T.-Z. Huang, L.-J. Deng, X.-L. Zhao, and M. Wang, "Group sparsity based regularization model for remote sensing image stripe noise removal," Neurocomputing, 2017.
19. J.-J. Mei, Y. Dong, T.-Z. Huang, and W. Yin, "Cauchy noise removal by nonconvex ADMM with convergence guarantees," Journal of Scientific Computing, 2017.
20. L.-J. Deng, W. Guo, and T.-Z. Huang, "Single image super-resolution by approximated Heaviside functions," Information Sciences, vol. 348, pp. 107–123, 2016.
21. L.-J. Deng, W. Guo, and T.-Z. Huang, "Single image super-resolution via an iterative reproducing kernel Hilbert space method," IEEE Transactions on Circuits and Systems for Video Technology, vol. PP, no. 99, 2015.
22. H. Schaeffer and S. Osher, "A low patch-rank interpretation of texture," SIAM Journal on Imaging Sciences, vol. 6, no. 1, pp. 226–262, 2013.
23. Y. Fan, T. Huang, T. Ma, and X. Zhao, "Cartoon-texture image decomposition via non-convex low-rank texture regularization," Journal of the Franklin Institute, vol. 354, no. 7, pp. 3170–3187, 2017.
24. W. Wang and M. K. Ng, "A nonlocal total variation model for image decomposition: illumination and reflectance," Numerical Mathematics: Theory, Methods and Applications, vol. 7, no. 3, pp. 334–355, 2014.
25. H. Chang, M. K. Ng, W. Wang, and T. Zeng, "Retinex image enhancement via a learned dictionary," Optical Engineering, vol. 54, no. 1, Article ID 140914, 2015.
26. L. J. P. van der Maaten, E. O. Postma, and J. H. van den Herik, "Dimensionality reduction: a comparative review," Journal of Machine Learning Research, vol. 10, no. 40, pp. 66–71, 2009.
27. Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: closing the gap to human-level performance in face verification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '14), pp. 1701–1708, IEEE, Columbus, OH, USA, June 2014.
28. F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: a unified embedding for face recognition and clustering," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '15), pp. 815–823, IEEE, Boston, MA, USA, June 2015.
29. M. P. Beham and S. M. M. Roomi, "A review of face recognition methods," International Journal of Pattern Recognition and Artificial Intelligence, vol. 27, no. 4, Article ID 1356005, 2013.
30. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010.
31. X.-L. Zhao, F. Wang, T.-Z. Huang, M. K. Ng, and R. J. Plemmons, "Deblurring and sparse unmixing for hyperspectral images," IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 7, pp. 4045–4058, 2013.
32. J. Liu, T.-Z. Huang, I. W. Selesnick, X.-G. Lv, and P.-Y. Chen, "Image restoration using total variation with overlapping group sparsity," Information Sciences, vol. 295, pp. 232–246, 2015.
33. T.-Y. Ji, T.-Z. Huang, X.-L. Zhao, T.-H. Ma, and L.-J. Deng, "A non-convex tensor rank approximation for tensor completion," Applied Mathematical Modelling, vol. 48, pp. 410–422, 2017.
34. T.-X. Jiang, T.-Z. Huang, X.-L. Zhao, L.-J. Deng, and Y. Wang, "A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '17), IEEE, Honolulu, HI, USA, 2017.
35. T.-H. Ma, T.-Z. Huang, X.-L. Zhao, and Y. Lou, "Image deblurring with an inaccurate blur kernel using a group-based low-rank image prior," Information Sciences, vol. 408, pp. 213–233, 2017.
36. T.-Y. Ji, T.-Z. Huang, X.-L. Zhao, T.-H. Ma, and G. Liu, "Tensor completion using total variation and low-rank matrix factorization," Information Sciences, vol. 326, pp. 243–257, 2016.
37. K. Liu, Y.-Q. Cheng, and J.-Y. Yang, "Algebraic feature extraction for image recognition based on an optimal discriminant criterion," Pattern Recognition, vol. 26, no. 6, pp. 903–911, 1993.
38. J. Yang and J.-Y. Yang, "From image vector to matrix: a straightforward image projection technique - IMPCA vs. PCA," Pattern Recognition, vol. 35, no. 9, pp. 1997–1999, 2002.
39. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, John Wiley & Sons, New York, NY, USA, 2012.
40. P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss, "The FERET database and evaluation procedure for face-recognition algorithms," Image and Vision Computing, vol. 16, no. 5, pp. 295–306, 1998.
41. P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss, "The FERET evaluation methodology for face-recognition algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 10, pp. 1090–1104, 2000.
Copyright
Copyright © 2017 Tai-Xiang Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.