Research Article  Open Access
A Novel Dictionary Learning Model with PTHLBP for Palmprint Recognition
Abstract
A novel projective dictionary pair learning (PDPL) model with statistical local features is proposed for palmprint recognition. A pooling technique is used to enhance the invariance of the hierarchical local binary pattern (PTHLBP) for palmprint feature extraction. PDPL is employed to learn an analysis dictionary and a synthesis dictionary, which are utilized for image discrimination and representation, respectively. The proposed algorithm has been tested on the Hong Kong Polytechnic University (PolyU) palmprint database (version 2), on which near-ideal recognition accuracy is achieved. Experimental results indicate that the algorithm not only greatly reduces the time complexity of the training and testing phases but also exhibits good robustness to image rotation and occlusion.
1. Introduction
Biometric recognition is a key identification technology that utilizes human physiological or behavioral characteristics, including the face, iris, fingerprint, palmprint, vein pattern, gait, signature, and voice, all of which possess uniqueness, measurability, and stability. As a relatively new member of the biometric recognition family, the palmprint has attracted considerable attention from researchers owing to its richness, stability, and other distinctive features. Since palmprint recognition was first proposed in 1998 by Shu and Zhang [1], numerous methodologies have been presented and developed. The existing algorithms can be divided into three categories [2]: texture-based, line-based, and subspace-based methods.
Local Binary Pattern (LBP) is a powerful local image descriptor, and great progress has been made in this field recently, including the Local Texture Feature (LTF) [3], the Dominant Local Binary Pattern (DLBP) [4], the Local Derivative Pattern (LDP) [5], and the Completed Local Binary Pattern (CLBP) [6]. Guo et al. [7] proposed a hierarchical multiscale LBP (HMLBP) algorithm for pattern recognition. Although HMLBP can effectively extract palmprint texture features, it suffers from a fatal weakness: its feature dimension is very high, which increases the computational burden of the subsequent recognition steps. Inspired by the hierarchical idea of that algorithm, a hierarchical LBP using a pooling technique is proposed here. Experiments show that the algorithm can efficiently extract the texture features of palmprints.
Recently, sparse representation has been successfully applied to various image restoration [8, 9] and face recognition [10, 11] tasks. In sparse representation, dictionary learning plays an important role, and synthesis dictionary learning has been widely studied in recent years [12, 13]. The representation coefficients of an image are usually obtained via ℓ0- or ℓ1-norm sparse coding, which involves a large amount of computation. In the DPL dictionary learning model proposed by Gu et al. [14], the synthesis dictionary and the analysis dictionary are trained jointly, which ensures that the representation coefficients can be approximated by a simple linear projection. This algorithm achieved good results in face recognition. In this paper, this dictionary learning model is introduced to palmprint recognition, and a novel projective dictionary pair learning (PDPL) model with statistical local features is proposed.
2. Feature Extraction
2.1. Local Binary Pattern
The LBP operator is one of the best-performing texture descriptors and has been widely applied in various fields. In the computation of the LBP, all nonuniform patterns are collected into a single bin, which loses a lot of useful information. Guo et al. [7] found that the percentage of information lost increases with the radius: if the radius amounts to 3, about 30% of the valid information is lost. Moreover, a pattern that is nonuniform at a larger radius may have a uniform counterpart at a smaller radius. They therefore proposed the hierarchical multiscale LBP (HMLBP), which can retrieve useful classification information from the nonuniform patterns. However, the dimension of the HMLBP feature is very high, as shown in Figure 1, which plots the HMLBP feature dimension against the hierarchical number when the size of the original palmprint image is 128 × 128.
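As a point of reference, the basic single-scale LBP operator can be sketched as follows. This is a minimal NumPy implementation of the standard 8-neighbor, radius-1 code, not the authors' multiscale implementation:

```python
import numpy as np

def lbp_8_1(img):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbors against the
    center and pack the comparison results into an 8-bit code."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]  # center pixels (borders have no full neighborhood)
    # neighbor offsets, clockwise starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy : img.shape[0] - 1 + dy,
                1 + dx : img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit
    return code
```

On a constant image every neighbor ties with the center, so every pixel receives the all-ones code 255; a strict local maximum receives code 0.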
Figure 1 shows that the dimension can reach 11200 when the hierarchical number is 3 and the number of neighbors is 16. Such a high-dimensional feature is extremely unfavorable for classification and leads to overfitting problems. A reduced dimension, as well as better results, can be obtained by aggregating statistics over different locations, and pooling solves exactly this problem. Yang et al. [15] used multipartition max pooling to enhance the invariance of statistical local features (SLF) to image registration error. In this paper, a novel feature extraction algorithm based on hierarchical LBP with pooling (PTHLBP) is therefore proposed, combining the principle of HMLBP.
2.2. Palmprint Feature Extraction by PTHLBP
Pooling techniques are widely used in image classification to extract invariant features. In general, pooling methods fall into two categories, namely, max pooling and sum pooling. Denote by h_1, h_2, …, h_n the features in a pool; the output feature is computed by f = Σ_{j=1}^{n} h_j in the case of sum pooling and by f = max(h_1, h_2, …, h_n) (element-wise maximum) in the case of max pooling. The feature extraction flow chart of the PTHLBP algorithm is presented in Figure 2.
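For concreteness, the two pooling rules can be illustrated on a small pool of histograms (an illustrative NumPy sketch; the values are arbitrary):

```python
import numpy as np

# A pool of n = 3 histograms h_1, h_2, h_3, stacked as rows.
h = np.array([[1., 0., 2.],
              [0., 3., 1.],
              [2., 1., 0.]])

f_max = h.max(axis=0)  # max pooling: element-wise maximum over the pool
f_sum = h.sum(axis=0)  # sum pooling: element-wise sum over the pool
```

Max pooling keeps, for every histogram bin, the strongest response among the sliding boxes, while sum pooling accumulates all responses.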
The LBP feature (as shown in Figure 3(b)) of the palmprint image (as shown in Figure 3(a)) is extracted first, and then the LBP feature image is partitioned hierarchically into levels. In the first level, the LBP feature image is divided into blocks (as shown in Figure 3(c)), and each block can be further divided into sub-blocks (as shown in Figure 3(d)), and so on. In each sub-block of the i-th level, a sequence of sliding boxes is created, whose height and width are in proportion to those of the sub-block, and then the histogram of each sliding box's LBP feature is computed. As an example, Figure 4 shows the feature generation in one sub-block. Suppose h_j is the histogram feature extracted from the j-th sliding box and that n sliding boxes are needed to traverse the entire sub-block. Then, the feature matrix H = [h_1, h_2, …, h_n] is obtained. In the case of max pooling, the final output feature of the sub-block is f = max(h_1, h_2, …, h_n); in the case of sum pooling, it is f = Σ_{j=1}^{n} h_j. Using the same method, the feature vectors of all sub-blocks can be obtained. Suppose the feature vector of the q-th sub-block of the p-th block is f_{p,q}; the output feature of the p-th block is then F_p = [f_{p,1}, f_{p,2}, …]. The feature vectors of all blocks can therefore be calculated, and the PTHLBP feature of the image is given by F = [F_1, F_2, …].
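The extraction procedure above can be sketched in NumPy as follows. This is a simplified illustration assuming a 2×2 block grid, a 2×2 sub-block grid, 256 LBP bins, a sliding box half the size of the sub-block, and a one-pixel sliding step; the function names and parameter choices are illustrative, not the authors' implementation:

```python
import numpy as np

def box_hist(patch, n_bins=256):
    """Histogram of the LBP codes inside one sliding box."""
    return np.bincount(patch.ravel(), minlength=n_bins)[:n_bins]

def pthlbp_subblock(sub, ratio=0.5, pool="max", n_bins=256):
    """Pool the histograms of all sliding boxes traversing one sub-block.
    Box height/width are `ratio` times those of the sub-block."""
    h, w = sub.shape
    bh, bw = max(1, int(h * ratio)), max(1, int(w * ratio))
    hists = np.stack([box_hist(sub[y:y + bh, x:x + bw], n_bins)
                      for y in range(0, h - bh + 1)
                      for x in range(0, w - bw + 1)])
    return hists.max(0) if pool == "max" else hists.sum(0)

def pthlbp(lbp_img, grid=(2, 2), subgrid=(2, 2), **kw):
    """Concatenate pooled sub-block features over a two-level partition."""
    feats = []
    for row in np.array_split(lbp_img, grid[0], axis=0):
        for block in np.array_split(row, grid[1], axis=1):
            for srow in np.array_split(block, subgrid[0], axis=0):
                for sub in np.array_split(srow, subgrid[1], axis=1):
                    feats.append(pthlbp_subblock(sub, **kw))
    return np.concatenate(feats)
```

On an 8×8 LBP code image this yields 2×2 blocks of 2×2 sub-blocks, i.e., 16 sub-blocks, each contributing one pooled 256-bin histogram.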
(a) Original image
(b) LBP feature image
(c) The first level block partition
(d) The second level sub-block partition
(a) First sliding box; its output feature is h_1
(b) Second sliding box; its output feature is h_2
(c) The j-th sliding box; its output feature is h_j
(d) Last sliding box; its output feature is h_n
3. Projective Dictionary Pair Learning Based on PTHLBP
3.1. Projective Dictionary Pair Learning
In recent years, dictionary learning has been widely studied together with sparse representation. Suppose X = [X_1, X_2, …, X_K] is the set of training samples from K classes, where X_k denotes the training samples of class k. A dictionary learning model aims to learn an effective data representation dictionary from the training samples for classification by exploiting the training label information. The general model of state-of-the-art dictionary learning algorithms can be summarized as

min_{D,A} ‖X − DA‖_F^2 + λ‖A‖_p + Ψ(D, A, Y),

where D denotes the dictionary to be learned, A denotes the coding coefficient matrix of X over D, λ denotes a regularization constant, Y denotes the class labels of the training set, and Ψ(D, A, Y) denotes some discrimination-promoting function that improves the identification ability of D and A.
Gu et al. [14] proposed an algorithm named projective dictionary pair learning, which learns a dictionary pair consisting of a synthesis dictionary D and an analysis dictionary P. The DPL model is [14]

min_{P,D} ‖X − DPX‖_F^2 + Ψ(D, P, X, Y),

where D is the synthesis dictionary used to reconstruct X and P is the analysis dictionary used to analytically code X, so that the coefficient matrix is obtained by the linear projection A = PX. The experimental results in [14] indicate that the method not only yields competitive accuracy but also greatly improves the computing speed. In this paper, the PDPL model [14] is used as a classifier for palmprint recognition.
3.2. Projective Dictionary Pair Learning Based on PTHLBP
In the projective dictionary pair learning (PDPL) model, the role of PX is equivalent to that of the coefficient matrix A, which is also nearly block diagonal. Suppose P_k is the analysis subdictionary corresponding to class k; then the projection of the samples from the other classes onto P_k should lie in a nearly null space, that is, P_k X_i ≈ 0 for all i ≠ k. According to the reconstruction principle of sparse representation, the synthesis subdictionary D_k should reconstruct the training samples of class k, and the reconstruction error can be computed by [14]

‖X_k − D_k P_k X_k‖_F^2.

The DPL model can then be expressed as [14]

min_{P,D} Σ_{k=1}^{K} ‖X_k − D_k P_k X_k‖_F^2 + λ‖P_k X̄_k‖_F^2, s.t. ‖d_i‖_2^2 ≤ 1,

where X̄_k is the training set of all classes except class k and d_i denotes the i-th atom of D. A variable matrix A is introduced, and the above problem is relaxed into [14]

min_{P,A,D} Σ_{k=1}^{K} ‖X_k − D_k A_k‖_F^2 + τ‖P_k X_k − A_k‖_F^2 + λ‖P_k X̄_k‖_F^2, s.t. ‖d_i‖_2^2 ≤ 1.
The detailed optimization process can be found in reference [14]. In this dictionary learning model, a synthesis dictionary and an analysis dictionary are jointly learned and work together to perform representation and discrimination simultaneously. The experimental results in [14] show that DPL exhibits highly competitive classification accuracy compared with state-of-the-art DL methods. In this paper, DPL is introduced to palmprint recognition, and a new projective dictionary pair learning model based on PTHLBP is proposed. Its steps can be summarized as follows.

(1) Extract the PTHLBP features of the training and test samples; denote the training features by X and a test feature by y.

(2) Initialize D and P as random matrices with unit Frobenius norm, fix them, and update A. Each A_k solves a standard least squares problem whose solution is

A_k = (D_k^T D_k + τI)^{-1} (D_k^T X_k + τP_k X_k).

(3) Fix A and update P and D. The solutions of P and D are obtained separately: P has the closed-form solution

P_k = τA_k X_k^T (τX_k X_k^T + λX̄_k X̄_k^T + γI)^{-1},

and D is updated by solving min_D Σ_k ‖X_k − D_k A_k‖_F^2, s.t. ‖d_i‖_2^2 ≤ 1, via the ADMM algorithm of [14].

(4) Repeat steps (2) and (3) until convergence; output the synthesis dictionary D and the analysis dictionary P.

(5) Compute the class-specific residuals e_k = ‖y − D_k P_k y‖_2.

(6) Output the identity of y as identity(y) = arg min_k e_k.
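The training and classification steps above can be sketched in NumPy as follows. This is a simplified illustration: the synthesis-dictionary update is approximated by an unconstrained regularized least squares instead of the norm-constrained ADMM update of [14], and all parameter values are illustrative rather than the authors' settings:

```python
import numpy as np

def dpl_train(X, labels, n_atoms=10, tau=0.05, lam=0.003, gamma=1e-4, iters=10):
    """Alternating updates of (A, P, D) for one dictionary pair per class.
    X: d x N matrix of feature columns; labels: length-N class labels."""
    classes = np.unique(labels)
    d = X.shape[0]
    rng = np.random.default_rng(0)
    D = {k: rng.standard_normal((d, n_atoms)) for k in classes}
    P = {k: rng.standard_normal((n_atoms, d)) for k in classes}
    for _ in range(iters):
        for k in classes:
            Xk = X[:, labels == k]
            Xbar = X[:, labels != k]  # samples of all other classes
            # A-step: least squares solution for the coding matrix A_k
            A = np.linalg.solve(D[k].T @ D[k] + tau * np.eye(n_atoms),
                                D[k].T @ Xk + tau * P[k] @ Xk)
            # P-step: closed-form analysis-subdictionary update
            P[k] = tau * A @ Xk.T @ np.linalg.inv(
                tau * Xk @ Xk.T + lam * Xbar @ Xbar.T + gamma * np.eye(d))
            # D-step: ridge-regularized fit (the unit-atom constraint of
            # [14], enforced there by ADMM, is omitted in this sketch)
            D[k] = Xk @ A.T @ np.linalg.inv(A @ A.T + gamma * np.eye(n_atoms))
    return D, P

def dpl_classify(y, D, P):
    """identity(y) = argmin_k || y - D_k P_k y ||_2"""
    return min(D, key=lambda k: np.linalg.norm(y - D[k] @ (P[k] @ y)))
```

As a sanity check, two synthetic classes living in disjoint coordinate subspaces are separated correctly: the wrong-class pair nearly annihilates the sample (residual ≈ ‖y‖), while the right-class pair reconstructs it.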
4. Experimental Results
The PolyU palmprint database (version 2) consists of 386 different palms, and 20 samples were collected for each palm in two sessions. The region of interest (ROI) is obtained by the algorithm proposed in reference [1], and the size of the ROI is 128 × 128. The training set comprises the samples collected in the first session, and the test set, named Testdata1, comprises the samples collected in the second session. In order to test the validity of our method, every training sample is matched with all the test samples in the database. If two samples come from the same palm, the matching is referred to as genuine (intraclass) matching; otherwise, it is called impostor (interclass) matching.
4.1. Parameter Setting
In the proposed algorithm, the parameters of PTHLBP and DPL must be set beforehand; their values are shown in Table 1. The LBP feature image is divided into 2 levels in the extraction of the PTHLBP feature. In the first level, the whole LBP feature image is divided into blocks, and each block is further divided into sub-blocks in the second level. The height and width of the sliding box are 0.5 times those of the sub-block.

4.2. The Influence of Training Sample Sizes on Recognition Rate
In order to verify the effectiveness of the proposed algorithm for small training sets, the recognition rate on Testdata1 is tested with different numbers of training samples per class, as shown in Figure 5.
Figure 5 shows that the recognition rate reaches 99% when only two training samples per class are used and 99.5% with three. These results show that the proposed algorithm is effective for small-sample databases.
4.3. Robustness to Pose and Occlusion
In this section, the robustness of the proposed algorithm to pose variation and palmprint occlusion is tested. Two additional test databases are built: Testdata2, obtained by rotating the samples of Testdata1 by various angles, and Testdata3, which simulates various levels of contiguous occlusion. Some samples of Testdata2 and Testdata3 are shown in Figure 6.
(a) Samples of Testdata1
(b) Samples of Testdata2
(c) Samples of Testdata3
First, the robustness to pose variation is tested on Testdata2; the results are shown in Table 2.

Table 2 shows that a recognition rate of over 99.7% can be achieved with up to 6 degrees of pose variation. When the pose variation increases to 10 degrees, the recognition rate still reaches 89.4%. This indicates that the proposed algorithm is robust to pose variation.
Second, the robustness to occlusion is tested on Testdata3; the results are shown in Table 3.

Table 3 shows that the recognition rate reaches 99.7% with 10% occlusion. Even when half of the image information is lost, the recognition rate still reaches 69.4%, indicating the robustness of the proposed algorithm to occlusion.
4.4. Performance Comparison
In most DL methods, the ℓ0- or ℓ1-norm sparsity constraint imposed on the representation coefficients makes the training and testing phases time consuming. DPL instead jointly learns a synthesis dictionary and an analysis dictionary to achieve signal representation and discrimination. Hence, the performance of the proposed algorithm is compared with that of a conventional sparse representation (SR) method. On the same training and test databases, the testing times of our method and of SR are 8.674 × 10⁻³ seconds and 1.49 seconds, respectively. This indicates that the proposed method can not only greatly reduce the time complexity of the training and testing phases but also deliver very competitive accuracy in visual classification tasks.
5. Conclusions
In this study, a novel projective dictionary pair learning (PDPL) model with PTHLBP is proposed. In the palmprint feature extraction, pooling is introduced to enhance the invariance of the local binary pattern feature to image occlusion and pose variation. In the classification, PDPL is used to learn an analysis dictionary and a synthesis dictionary; such a pair of dictionaries works together to perform representation and discrimination simultaneously. Experimental results indicate that the proposed method achieves excellent performance in both recognition accuracy and speed.
Competing Interests
The authors declare that they have no competing interests.
References
[1] W. Shu and D. Zhang, "Palmprint verification: an implementation of biometric technology," in Proceedings of the IEEE International Conference on Pattern Recognition, pp. 219–221, Brisbane, Australia, 1998.
[2] X.-M. Guo, W.-D. Zhou, S.-J. Geng, and Y. Wang, "A palmprint recognition algorithm based on horizontally expanded blanket dimension," Acta Automatica Sinica, vol. 38, no. 9, pp. 1496–1502, 2012.
[3] X. Y. Tan and B. Triggs, "Enhanced local texture feature sets for face recognition under difficult lighting conditions," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635–1650, 2010.
[4] S. Liao, M. W. Law, and A. C. Chung, "Dominant local binary patterns for texture classification," IEEE Transactions on Image Processing, vol. 18, no. 5, pp. 1107–1118, 2009.
[5] B. C. Zhang, Y. S. Gao, S. Q. Zhao, and J. Liu, "Local derivative pattern versus local binary pattern: face recognition with high-order local pattern descriptor," IEEE Transactions on Image Processing, vol. 19, no. 2, pp. 533–544, 2010.
[6] Z. Guo, L. Zhang, and D. Zhang, "A completed modeling of local binary pattern operator for texture classification," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1657–1663, 2010.
[7] Z. Guo, L. Zhang, D. Zhang, and X. Mou, "Hierarchical multiscale LBP for face and palmprint recognition," in Proceedings of the 17th IEEE International Conference on Image Processing (ICIP '10), pp. 4521–4524, September 2010.
[8] J. Z. Huang, X. L. Huang, and D. Metaxas, "Simultaneous image transformation and sparse representation recovery," in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), pp. 1–8, Anchorage, Alaska, USA, June 2008.
[9] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman, "Non-local sparse models for image restoration," in Proceedings of the IEEE 12th International Conference on Computer Vision, pp. 2272–2279, Kyoto, Japan, 2009.
[10] X. J. Lin, S. J. Geng, and Y. F. Pan, "Kernel sparse representation for image classification and face recognition," in Proceedings of the 11th European Conference on Computer Vision, pp. 1–14, Crete, Greece, September 2010.
[11] M. Yang and L. Zhang, "Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary," in Proceedings of the 11th European Conference on Computer Vision, pp. 448–461, Crete, Greece, 2010.
[12] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2009.
[13] J. Mairal, F. Bach, and J. Ponce, "Task-driven dictionary learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 4, pp. 791–804, 2012.
[14] S. Gu, L. Zhang, W. Zuo, and X. Feng, "Projective dictionary pair learning for pattern classification," in Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS '14), pp. 793–801, Montreal, Canada, December 2014.
[15] M. Yang, L. Zhang, S. C.-K. Shiu, and D. Zhang, "Robust kernel representation with statistical local features for face recognition," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 6, pp. 900–912, 2013.
Copyright
Copyright © 2016 Xiumei Guo and Weidong Zhou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.