Journal of Electrical and Computer Engineering
Volume 2019, Article ID 8370835, 7 pages
https://doi.org/10.1155/2019/8370835
Research Article

More Adaptive and Updatable: An Online Sparse Learning Method for Face Recognition

1School of Technology, Beijing Forestry University, Beijing 100083, China
2Beijing Laboratory of Urban and Rural Ecological Environment, Beijing Forestry University, Beijing 100083, China
3Department of Automation, Shanghai Jiao Tong University and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240, China

Correspondence should be addressed to Yue Zhao; zhaoyue0609@126.com

Received 24 June 2019; Accepted 24 August 2019; Published 13 October 2019

Academic Editor: Sos S. Agaian

Copyright © 2019 Qiaoling Han et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In actual face recognition applications, the sample sets are updated constantly. However, most face recognition models with a learning strategy do not consider this fact and use a fixed training set to learn the model only once. Moreover, the testing samples are discarded after the testing process is completed; that is, the training and testing processes are separated, and the latter gives no feedback to the former for better recognition results. To attenuate these problems, this paper proposes an online sparse learning method for face recognition. It updates the salience evaluation vector in real time to construct a dynamic facial feature description model. A strategy for updating the gallery set is also proposed. Both the dynamic facial feature description model and the gallery set are employed to recognize faces. Experimental results show that the proposed method improves face recognition accuracy compared with classical learning models and other state-of-the-art face recognition methods.

1. Introduction

Face recognition is a remarkable ability that humans perform effortlessly and automatically in daily life. Owing to the wide applications of automatic face recognition, such as public security and human-robot interaction, it is an established field attracting great attention [1–3].

In 1973, Takeo Kanade presented the first face recognition system [4]. After that, there was a dormant period in automatic face recognition until the work on a low-dimensional face representation by Sirovich and Kirby, derived using the Karhunen-Loève (K-L) transform or principal component analysis (PCA) [5]. It was the pioneering work of Turk and Pentland that proposed a new facial feature description model, Eigenface [6, 7], to recognize faces. From then on, a number of face recognition and modeling systems have been developed and deployed. Some major state-of-the-art face recognition methods are as follows: Ahonen et al. divide a face image into regions and then extract local binary patterns (LBP) from these regions to recognize faces [8]; to enhance the noise robustness of LBP features, Zhang et al. [9] introduce a combination approach, which employs multiorientation and multiscale Gabor filtering to extend LBP to the Local Gabor Binary Pattern (LGBP); they also propose a combination of the spatial histogram and the Gabor phase information encoding scheme, the Histogram of Gabor Phase Patterns (HGPP) [10]; to reduce the dimensionality of LGBP and suppress intrapersonal variation, the whitened PCA technique is applied to LGBP by Nguyen et al. [11] and leads to a very high recognition rate; by applying the LBP-based structure to oriented magnitudes, Patterns of Oriented Edge Magnitudes (POEM) is proposed by Vu and Caplier [12], followed by whitened PCA to increase the discriminative power and robustness for face recognition; to generate a more robust face descriptor, Mehta et al. [13] propose a new approach for face recognition using directional and texture information from face images, which improves the recognition results significantly. Besides these single-feature face recognition methods, several feature-fusion methods have recently been proposed for higher face recognition accuracy: Zou et al. [14] adopt the Borda count method to combine different classifiers, each trained with one type of feature selected from LBP, Gabor, and Eigenface; Tan and Triggs [15] fuse LBP and Gabor features to recognize faces with the kernel discriminant common vector (DCV) method.

The classical face recognition methods mentioned above employ a training set to train the face recognition model and a gallery set to serve as the prototype against which the testing samples are matched during the recognition procedure. Good results have been achieved by these classical methods; however, they also have two drawbacks: (1) the fixed training and gallery sets cannot adapt to the variation of face images during testing, especially in real applications (for example, faces gradually change with age); (2) each testing sample is discarded after the recognition procedure, which causes a great loss of the information contained in the testing set. That is, the training and testing processes are separated, and the latter gives no feedback to the former for better recognition results. To attenuate these problems, an online sparse learning method is proposed in this paper, which can accommodate itself to the variation of face images in the testing set or in real applications. In addition, the proposed method can update the facial feature description model in real time and ensure good sparsity of the facial features.

2. Previous Works

In our previous work [16], we proposed a sparse learning method for salient facial feature description and obtained a new facial feature description model, which is given by

$y = w \odot x, \qquad (1)$

where $y$ is a sparse facial feature vector, $x$ is an LBP feature vector, and $w$ is a salience evaluation vector for $x$; $\odot$ denotes the elementwise product, and the segment $y_i$ of $y$ is the local sparse feature vector of the $i$-th region of the facial image.

The method for obtaining $w$ is described in detail in our previous work [16], and there are two limitations in that method: (1) the training set is fixed; that is, it does not consider the dynamic changes of the sample set; (2) $w$ is obtained by learning only once, with no further adjustment. Thus, its adaptability and self-adjustment ability need to be improved. An online sparse learning method for face recognition is presented in this paper to solve these problems.

3. The Proposed Method

3.1. Online Sparse Learning

With the method in our previous work [16], the initial salience evaluation vector and facial feature description model can be obtained; suppose they are $w_0$ and $M_0$. Let $T$ and $G$ denote the testing set and the gallery set, respectively. Now, we propose an online sparse learning method to update $w_0$ and $M_0$.

Randomly select a sample from the testing set $T$ and denote it as $t_n$. Suppose the label of $t_n$ is $l_n$, which can be computed with the nearest neighbor model and the gallery set $G$. Take $p_n$ and $q_n$ to denote one nearest neighbor and one next-nearest neighbor of $t_n$, which are obtained from the gallery set $G$ with $l_n$.
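As a concrete illustration of the neighbor search, the following is a minimal Python sketch. The chi-square histogram distance and the rule that the next-nearest neighbor is the closest gallery sample with a different label are assumptions of this sketch, not taken from the paper:

```python
def chi_square(u, v):
    # Chi-square distance, commonly used to compare LBP histograms
    # (an assumption of this sketch).
    return sum((a - b) ** 2 / (a + b) for a, b in zip(u, v) if a + b > 0)

def nearest_and_next_nearest(t, gallery):
    """gallery: list of (feature_vector, label) pairs.
    Returns (predicted label l_n, nearest neighbor p_n, next-nearest q_n),
    where q_n is taken here as the closest gallery sample whose label
    differs from the predicted one."""
    ranked = sorted(gallery, key=lambda g: chi_square(t, g[0]))
    label = ranked[0][1]                            # nearest-neighbor label
    p = ranked[0][0]                                # nearest neighbor p_n
    q = next(f for f, l in ranked if l != label)    # next-nearest neighbor q_n
    return label, p, q
```

With a toy two-class gallery, the function returns the nearest same-class sample and the closest sample of the other class.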

Region-based features (such as LBP and POEM) can be extracted from the images $t_n$, $p_n$, and $q_n$, which, respectively, are as follows:

$x_t = F(t_n), \quad x_p = F(p_n), \quad x_q = F(q_n), \qquad (2)$

where $F(\cdot)$ denotes the region-based feature extraction.

A feature transformation $\varphi(\cdot, \cdot)$ is introduced to generate two classes of distance samples. If two feature vectors are from the nearest neighbor couple, the following transformation produces a positive sample:

$e^{+} = \varphi(x_t, x_p). \qquad (3)$

A negative sample can likewise be generated by applying the same transformation to the next-nearest neighbor couple; namely, the transformation generates a negative sample $e^{-} = \varphi(x_t, x_q)$. Note that the numbers of nearest neighbors and next-nearest neighbors are both at least one. All the positive and negative samples form the positive and negative sample sets $E^{+}$ and $E^{-}$, respectively. The sample matrix $E$ consists of $E^{+}$ and $E^{-}$. The label vector $b$ is obtained by assigning label "1" to each positive sample and "0" to each negative sample. Let $w_1$ denote the updated value of $w_0$; then we have

$w_1 = w_0 + \Delta w, \qquad (4)$

where $\Delta w$ is the increment of $w_0$.
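The sample construction can be sketched as follows. The elementwise absolute difference used for the transformation is an assumption of this sketch, not necessarily the transformation used in the paper:

```python
def transform(xa, xb):
    # Hypothetical feature transformation phi: elementwise absolute
    # difference between two region-based feature vectors.
    return [abs(a - b) for a, b in zip(xa, xb)]

def build_samples(x_t, x_p, x_q):
    """Return the sample matrix E (as rows) and label vector b: the
    positive sample from the nearest-neighbor couple gets label 1, the
    negative sample from the next-nearest-neighbor couple gets label 0."""
    E = [transform(x_t, x_p), transform(x_t, x_q)]
    b = [1, 0]
    return E, b
```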

The initial value $w_0$ is obtained by using the sample matrix $E_0$ (which is learned from the training samples and is different from $E$) together with the salience evaluation vector to fit the label vector. As $w_1$ is the updated value of $w_0$, it should likewise fit the label vector together with the sample matrix. If we take one testing sample $t_n$ and its nearest and next-nearest neighbors as the training samples, we can use $E$ and $w_1$ to fit $b$. Then, we arrive at

$E w_1 = b. \qquad (5)$

Plugging equation (4) into equation (5), we have

$E (w_0 + \Delta w) = b. \qquad (6)$

A linear system in $\Delta w$ is obtained by substituting $b$ with $\tilde{b} = b - E w_0$, which is given by

$E \Delta w = \tilde{b}. \qquad (7)$
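The substitution that yields this linear system in the increment can be sketched directly (assuming NumPy arrays; the function and variable names are illustrative):

```python
import numpy as np

def residual_system(E, b, w0):
    """Construct the linear system E @ dw = b - E @ w0: substituting
    w1 = w0 + dw into E @ w1 = b and moving E @ w0 to the right-hand
    side leaves a system in the increment dw alone."""
    E = np.asarray(E, dtype=float)
    rhs = np.asarray(b, dtype=float) - E @ np.asarray(w0, dtype=float)
    return E, rhs
```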

A visual expression is provided in Figure 1 to better understand the facial feature transformation of equation (3) and the principle for constructing the linear system.

Figure 1: Visual expression of facial feature transformation and theory for constructing the linear system.

As the number of samples in $E$ is much smaller than the dimensionality of each sample, a unique solution for $\Delta w$ cannot be obtained. Here, structured sparse representation is introduced [17–21] to overcome this problem; simultaneously, this strategy leads to sparsity at both the group and individual levels. Dividing the sample matrix $E$ and the salience evaluation increment $\Delta w$ by region, we have

$E = [E_1, E_2, \dots, E_m], \quad \Delta w = [\Delta w_1; \Delta w_2; \dots; \Delta w_m], \qquad (8)$

where $E_j$ is the $j$-th group of columns of $E$ and $\Delta w_j$ is the corresponding salience evaluation vector of $E_j$. Then, an optimization problem is constructed to solve $\Delta w$ as follows:

$\min_{\Delta w} \; \frac{1}{2} \Big\| \tilde{b} - \sum_{j=1}^{m} E_j \Delta w_j \Big\|_2^2 + \lambda_1 \sum_{j=1}^{m} \| \Delta w_j \|_2 + \lambda_2 \| \Delta w \|_1, \qquad (9)$

where $\| \cdot \|_2$ is the Euclidean norm, and $\lambda_1$ and $\lambda_2$ are complexity parameters with positive values that control the amount of shrinkage; namely, the larger their values are, the greater the amount of shrinkage.
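An objective of this sparse-group-lasso form can be minimized by proximal gradient descent. The sketch below is illustrative (step size, iteration count, and regularization weights are assumptions), not the solver used in the paper:

```python
import numpy as np

def sparse_group_lasso(E, r, groups, lam1=0.1, lam2=0.1, step=None, iters=500):
    """Minimize 0.5*||r - E @ dw||^2 + lam1*sum_j ||dw_j||_2 + lam2*||dw||_1
    by proximal gradient descent (ISTA). `groups` lists the column indices
    of each region block E_j."""
    E = np.asarray(E, float)
    r = np.asarray(r, float)
    if step is None:
        step = 1.0 / np.linalg.norm(E, 2) ** 2   # 1 / Lipschitz constant
    dw = np.zeros(E.shape[1])
    for _ in range(iters):
        z = dw - step * E.T @ (E @ dw - r)        # gradient step on the quadratic
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam2, 0.0)  # l1 shrinkage
        for g in groups:                          # group-wise shrinkage
            norm = np.linalg.norm(z[g])
            z[g] = 0.0 if norm <= step * lam1 else z[g] * (1 - step * lam1 / norm)
        dw = z
    return dw
```

On a separable toy problem the solver drives unneeded coordinates exactly to zero while shrinking the active ones, which is the group- and individual-level sparsity the text describes.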

When $\Delta w$ is obtained, $w_1$ can be computed with equation (4). Then, we use $w_1$ to construct the facial feature description model $M_1$ (as in equation (1)). $w_0$ and $M_0$ can then be updated with $w_1$ and $M_1$, respectively. These steps are performed iteratively.

A summary of the online sparse learning algorithm is given in Algorithm 1.

To better understand the online sparse learning algorithm, a framework of its essential procedures is presented in Figure 2, and a detailed description is summarized in Algorithm 1. As testing samples arrive, the online sparse learning procedures are performed continuously. More importantly, the time cost of the online sparse learning algorithm is very low, which ensures that it can be performed in real time.
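The control flow of one online learning iteration can be sketched as follows; the five helper functions are injected placeholders, since their concrete definitions follow the paper's own notation:

```python
def online_sparse_learning_step(t, gallery, w, find_neighbors, build_samples,
                                solve_increment, is_correct, update_gallery):
    """One iteration of the online sparse learning loop (a sketch of the
    overall control flow, not the authors' implementation)."""
    label, p, q = find_neighbors(t, gallery)   # NN label from the gallery set
    E, b = build_samples(t, p, q)              # positive/negative samples
    dw = solve_increment(E, b, w)              # structured sparse fit
    w = [wi + di for wi, di in zip(w, dw)]     # update w (equation (4))
    if is_correct(t, p, q):                    # Rule 1 label check
        gallery = update_gallery(gallery, t, label)
    return w, gallery
```

Dependency injection keeps the sketch independent of any particular feature, distance, or solver choice.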

Figure 2: The framework of online sparse learning algorithm.
Algorithm 1: Online sparse learning algorithm.
3.2. Criterion of Label Correctness

This subsection presents a comprehensive criterion for judging the correctness of the testing sample label $l_n$. As we employ the nearest neighbor model to assign a label to the testing sample, a threshold on the nearest-neighbor distance is used to determine whether the label is correct. To further improve the accuracy of the judgement, the next-nearest-neighbor distance is also employed. Let $d_1$ denote the distance between the two elements of the nearest neighbor couple $(t_n, p_n)$, and let $d_2$ denote the distance between the two elements of the next-nearest neighbor couple $(t_n, q_n)$. With the nearest-neighbor distance and the next-nearest-neighbor distance, a comprehensive criterion for judging the correctness of the testing sample label is proposed as follows.

Rule 1 (criterion of label correctness). If $d_1$ and $d_2$ satisfy

$d_1 < \theta_1 \quad \text{and} \quad d_2 > \theta_2, \qquad (10)$

then the label $l_n$ of $t_n$ is correct, where $\theta_1$ and $\theta_2$ are two thresholds for $d_1$ and $d_2$, respectively, which are set manually according to experience.
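Rule 1 amounts to a simple predicate over the two distances; in this sketch the comparison directions (nearest-neighbor distance small enough, next-nearest-neighbor distance large enough) and the threshold names are assumptions:

```python
def label_is_correct(d1, d2, theta1, theta2):
    """Accept the nearest-neighbor label only when the nearest-neighbor
    distance d1 is below theta1 AND the next-nearest-neighbor distance d2
    is above theta2; both thresholds are set empirically."""
    return d1 < theta1 and d2 > theta2
```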

3.3. Gallery Set Update Strategy

In this research, a dynamic gallery set is taken as the input of the online sparse learning algorithm. If the label of the testing sample is correct, the following strategy is used to update the gallery set (Algorithm 2).

Algorithm 2: Gallery set update strategy.

4. Experiments

In this section, the performance of the proposed method is demonstrated on the FERET database [22]. We first compare the proposed method with the classical learning models PCA [23] and WPCA [24], using LBP and POEM features. Then, a number of state-of-the-art face recognition results obtained on this database are cited to evaluate the performance of the proposed method. The overall recognition rate (ORR) is defined as the ratio of the number of correctly recognized images to the total number of images in the whole face dataset, which is a comprehensive criterion for evaluating the performance of a face recognition method.
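The ORR pools correct recognitions over all probe images rather than averaging per-subset rates; a minimal sketch:

```python
def overall_recognition_rate(n_correct_per_set, n_total_per_set):
    """ORR: correctly recognized images over all probe images, pooled
    across the probe subsets (not the mean of per-subset rates)."""
    return sum(n_correct_per_set) / sum(n_total_per_set)
```

Pooling weights each subset by its size, so large subsets such as fb dominate the ORR more than small ones such as fc.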

4.1. Data Description

The FERET database is a well-known database for face recognition. It contains 14,051 grayscale images representing 1,199 individuals. These images contain variations in lighting, facial expression, and time. The gallery and probe sets are selected following the FERET evaluation protocol [22]: fa (1,196 images) is used as the gallery set, while fb (1,195 images), fc (194 images), dupI (722 images), and dupII (234 images) are used as the probe sets. Note that the original training set comes from two training sets, the standard FERET training set (736 images) and the subfc training set (194 images) [8]. All images are cropped to 130 × 150 pixels for LBP feature extraction and 96 × 96 pixels for POEM feature extraction.

The Extended Yale B face database consists of 38 subjects, each with approximately 64 frontal-view images under various lighting conditions. All images used in the experiments are manually aligned, cropped, and then resized to 168 × 192 pixels. The database is divided into subsets according to the angle of the light-source direction: a gallery set (38 images), S1 (225 images), S2 (456 images), S3 (455 images), S4 (526 images), and S5 (714 images). We randomly select 500 images for the training set.

4.2. Results

The comparison results between the proposed method and other learning methods for face recognition on the FERET database are shown in Tables 1 and 2. From these two tables, we can see that the proposed method achieves the highest recognition accuracy on most of the subsets (except subset fb). Under the comprehensive criterion of ORR, the proposed method (94.6% with LBP features and 95.4% with POEM features) is significantly better than the other two methods, which indicates that the online sparse learning strategy has great advantages over the PCA and WPCA models.

Table 1: Comparison results of different learning methods with LBP features.
Table 2: Comparison results of different learning methods with POEM features.

Comparing the corresponding results of the same method in Tables 1 and 2, we find that the results obtained using POEM features (95.4%) are better than those obtained using LBP features (94.6%), which indicates that POEM features are more discriminative than LBP features. The proposed method obtains significantly better results than the other two learning methods, PCA and WPCA, with both LBP and POEM features, which shows that it generalizes well across different region-based features.

We also compare the performance of the proposed method with other state-of-the-art methods and summarize the results in Table 3. This table shows that the performance of the proposed method is much better than that of the other state-of-the-art methods under the comprehensive criterion ORR. More importantly, the proposed method attains 100% accuracy on the subset dupII and a much higher recognition rate than the other methods on the subset dupI. Since dupI and dupII were taken within one year of the gallery images and at least one year apart, respectively, this means the proposed method is much more robust to variations in time and age.

Table 3: Comparison results of different face recognition methods with POEM features.

5. Conclusions

Face recognition is a principal task in person identification. This paper proposes an online sparse learning method for face recognition. The initial salience evaluation vector in a facial feature description model is obtained from our previous work. Then, an online sparse learning method is proposed to learn the increment of the salience evaluation vector with one testing sample and the current gallery set. The salience evaluation vector is updated by adding its increment to the initial salience evaluation vector. A gallery set update strategy is also presented in this paper, which achieves the dynamic update of the gallery set. The proposed method can update the facial feature description model in real time and ensure good sparsity of the facial features. In addition, it takes the dynamic changes of the sample set into account, which improves its generalization ability and yields good face recognition results. Experimental results show that the proposed method improves face recognition accuracy compared with the PCA and WPCA methods.

Data Availability

The previously reported FERET database, PCA learning model, and WPCA learning model were used to support this study and are available at DOI: 10.1109/CVPR.1997.609311, DOI: 10.1016/S1077-3142(03)00077-8, and DOI: 10.1016/j.patcog.2009.12.004. These prior studies and datasets are cited at the relevant places within the text as references [22–24].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors’ Contributions

Jianbo Su contributed to conceptualization and data curation; Qiaoling Han was involved in methodology and wrote the original draft; and Yue Zhao reviewed and edited the manuscript.

Acknowledgments

This work was supported in part by the Fundamental Research Funds for the Central Universities (2019ZY12) and in part by the NSF of China under the grant nos. 61533012 and 91748120.

References

  1. S. Elisabetta and F. Carlo, “Design and implementation of a multi-modal biometric system for company access control,” Algorithms, vol. 10, pp. 61–71, 2017.
  2. C. Lu, W. Liu, X. Liu, and S. An, “Double sides 2DPCA for face recognition,” in Proceedings of the International Conference on Intelligent Computing, pp. 446–459, Springer, Shanghai, China, September 2008.
  3. F. Liu, Y. Ding, F. Xu, and Q. Ye, “Learning low-rank regularized generic representation with block-sparse structure for single sample face recognition,” IEEE Access, vol. 7, pp. 30573–30587, 2019.
  4. T. Kanade, “Picture processing system by computer complex and recognition of human faces,” Ph.D. dissertation, Kyoto University, Kyoto, Japan, 1973.
  5. L. Sirovich and M. Kirby, “Low-dimensional procedure for the characterization of human faces,” Journal of the Optical Society of America A, vol. 4, no. 3, pp. 519–524, 1987.
  6. M. Turk and A. Pentland, “Face recognition using eigenfaces,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 586–591, IEEE, 1991.
  7. M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
  8. T. Ahonen, A. Hadid, and M. Pietikainen, “Face description with local binary patterns: application to face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037–2041, 2006.
  9. W. Zhang, S. Shan, W. Gao, X. Chen, and H. Zhang, “Local Gabor binary pattern histogram sequence (LGBPHS): a novel non-statistical model for face representation and recognition,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), vol. 1, pp. 786–791, IEEE, Beijing, China, October 2005.
  10. B. Zhang, S. Shan, X. Chen, and W. Gao, “Histogram of Gabor phase patterns (HGPP): a novel object representation approach for face recognition,” IEEE Transactions on Image Processing, vol. 16, no. 1, pp. 57–68, 2007.
  11. H. V. Nguyen, L. Bai, and L. Shen, “Local Gabor binary pattern whitened PCA: a novel approach for face recognition from single image per person,” in Advances in Biometrics, pp. 269–278, Springer, Berlin, Germany, 2009.
  12. N.-S. Vu and A. Caplier, “Enhanced patterns of oriented edge magnitudes for face recognition and image matching,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 1352–1365, 2012.
  13. R. Mehta, J. Yuan, and K. Egiazarian, “Face recognition using scale-adaptive directional and textural features,” Pattern Recognition, vol. 47, no. 5, pp. 1846–1858, 2014.
  14. J. Zou, Q. Ji, and G. Nagy, “A comparative study of local matching approach for face recognition,” IEEE Transactions on Image Processing, vol. 16, no. 10, pp. 2617–2628, 2007.
  15. X. Tan and B. Triggs, “Fusing Gabor and LBP feature sets for kernel-based face recognition,” in Analysis and Modeling of Faces and Gestures, pp. 235–249, Springer, Berlin, Germany, 2007.
  16. Y. Zhao and J. Su, “Sparse learning for salient facial feature description,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 5565–5570, IEEE, Hong Kong, China, May-June 2014.
  17. M. Stojnic, F. Parvaresh, and B. Hassibi, “On the reconstruction of block-sparse signals with an optimal number of measurements,” IEEE Transactions on Signal Processing, vol. 57, no. 8, pp. 3075–3085, 2009.
  18. Z. Liu, D. Jiang, Y. Li, Y. Cao, M. Wang, and Y. Xu, “Automatic face recognition based on sparse representation and extended transfer learning,” IEEE Access, vol. 7, pp. 2387–2395, 2019.
  19. J. Friedman, T. Hastie, and R. Tibshirani, “A note on the group lasso and a sparse group lasso,” 2010, http://arxiv.org/abs/1001.0736.
  20. Y. Xu, Z. Li, J. Yang, and D. Zhang, “A survey of dictionary learning algorithms for face recognition,” IEEE Access, vol. 5, pp. 8502–8514, 2017.
  21. H. Qiu, D. S. Pham, S. Venkatesh, J. Lai, and W. Liu, “Innovative sparse representation algorithms for robust face recognition,” International Journal of Innovative Computing, Information & Control, vol. 7, no. 10, pp. 5645–5667, 2011.
  22. P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss, “The FERET evaluation methodology for face-recognition algorithms,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 10, pp. 1090–1104, 2000.
  23. B. A. Draper, K. Baek, M. S. Bartlett, and J. R. Beveridge, “Recognizing faces with PCA and ICA,” Computer Vision and Image Understanding, vol. 91, no. 1-2, pp. 115–137, 2003.
  24. W. Deng, J. Hu, J. Guo, W. Cai, and D. Feng, “Robust, accurate and efficient face recognition from a single training image: a uniform pursuit approach,” Pattern Recognition, vol. 43, no. 5, pp. 1748–1762, 2010.