Mobile Information Systems
Volume 2019, Article ID 2325891, 9 pages
Research Article

Lightweight Verification Schema for Image-Based Palmprint Biometric Systems

UTP University of Science and Technology, Faculty of Telecommunications, Computer Science and Electrical Engineering, Al. Prof. S. Kaliskiego 7, 85-796 Bydgoszcz, Poland

Correspondence should be addressed to Agata Giełczyk; agata.gielczyk@utp.edu.pl

Received 23 October 2018; Revised 11 January 2019; Accepted 3 February 2019; Published 18 February 2019

Academic Editor: Yuh-Shyan Chen

Copyright © 2019 Agata Giełczyk et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Palmprint biometrics is a promising modality that enables efficient human identification, also in mobile scenarios. In this paper, a novel approach to feature extraction for palmprint verification is presented. The features are extracted from hand geometry and palmprint texture and then fused. The fusion of features yields higher accuracy and, at the same time, provides more robustness to intrusive factors such as illumination variation or noise. The major contribution of this paper is the proposal and evaluation of a lightweight verification schema for biometric systems that improves accuracy without increasing computational complexity, which is a necessary requirement in real-life scenarios.

1. Introduction

Biometric identification systems are becoming increasingly popular and have been widely researched recently. They are applied as security systems, for example, for detecting suspects in a crowd or verifying the identity of a person entering a plane or a restricted area. The key advantages of biometrics [1] are as follows: it is not possible to forget any token (as those tokens are actually parts of the body or behaviour!), it is not required to carry any additional items (such as keys and badges), and the same biometric feature may be used in numerous cases (e.g., in a biometric passport, in a local sport center, and to unlock a smartphone). Thus, biometrics is user-friendly, and therefore new methods and emerging modalities are still being proposed [2]. Currently, the key challenges of such systems are liveness detection, vulnerability to attacks, computing time (especially for systems with huge databases), user acceptance, privacy, and distortions (pose rotations or varying illumination conditions). Biometrics may be based on numerous traits such as fingerprint, palmprint, iris, voice, gait, and many others [3]. They can be either anatomical (physical), such as ear biometrics [4] and lips recognition [5], or behavioral, such as keystroke dynamics [6] or mouse clicks [7]. Although the fingerprint is the most popular biometric trait and the iris seems to be the most reliable one [8], in our research we focused on palmprint images. Devices acquiring iris samples are very expensive, while fingerprint recognition is difficult when the finger is dirty. Palmprints have several advantages over other biometric traits [2, 9–11]: they are unique and distinctive, formed during pregnancy, easy to self-position, have a rich structure, and may be captured in low-resolution images; what is more, devices for sample acquisition are relatively cheap.

Therefore, our goal in this work is to propose a new lightweight verification schema based on palmprint images that may in the future be moved to mobile systems and scenarios. Moreover, palmprints are not associated with police operations or criminal investigations and thus are more appealing to end-users and societies.

This article is organized as follows: in Section 2, biometric systems and multimodal biometrics are described, and in Section 3, the proposed method is presented. In Sections 4 and 5, results and conclusions are provided, respectively.

2. Related Work

Palmprint-based recognition was introduced more than 20 years ago. One of the first implemented systems was proposed in [12], where Gabor filters were utilised for feature extraction. The 2D Gabor filter was reused multiple times, for example, in [13, 14].

The most commonly used biometric verification system is composed of several steps enumerated by Zhang et al. in [12]: image acquisition, preprocessing, feature extraction, and feature matching. In the literature, there are numerous methods used in order to perform each of those steps.

Preprocessing is performed to achieve two goals [15]. The first one is image enhancement (reducing unwanted details and noises). The second is ROI extraction. Selection of the proper preprocessing method is meaningful and can strongly affect the whole verification system’s accuracy, a fact that was investigated in our previous work [16].

A summary of the first approaches to palmprint recognition was presented in the book [17]. There, the set of possible features extracted from a palmprint was mentioned: principal lines, minutiae points, texture, and geometry. Various approaches to feature extraction have been presented in the literature. Several of them are based on varied transforms: among others, the Hough transform used in [18], the Haar discrete wavelet transform implemented in [19], and the discrete cosine transform used in [20]. There are also local descriptors applied to feature extraction: local binary patterns [21], SURF and SIFT descriptors [22], and the histogram of oriented gradients used in our previous work [16]. Another popular method is based on statistical principal component analysis (PCA), implemented in [23, 24] and presented, for example, in [25–27]. Yet another approach focuses on principal lines, as in [28]. In [29], Huang et al. emphasize the usefulness of principal lines in palmprint-based systems: (1) this approach is similar to human behaviour, since in order to compare two palmprints, people instinctively compare the principal lines (lines of head, heart, and life); (2) principal lines are more stable and more visible than wrinkles, and they are less affected by noise or illumination conditions; and (3) they can be used in retrieval systems, for example, in forensics.

Features extracted from palmprint images need to be matched. There are plenty of matching methods available. Commonly, they may be divided into two groups: simple distances and artificial intelligence methods. From the first group, it is possible to enumerate the Euclidean distance [30], the Hamming distance [31], and the average sum-squares distance [32]. Popular artificial intelligence methods are neural networks [33], the support vector machine (SVM) [34], or dedicated classifiers, such as the multiclass projection extreme learning machine (MPELM) proposed in [35]. In parallel, hand geometry was proposed as a biometric feature. In [36], Yoruk et al. used independent component analysis (ICA) for feature extraction and a modified Hausdorff distance for matching.

Due to the insufficient accuracy of unimodal biometrics, the multimodal approach was proposed. There are numerous multiplication strategies given in [37] and described as follows:
(i) Multiple sensors: samples acquired by at least two sensors
(ii) Multiple biometrics: analyzing, for instance, palmprint and fingerprint at the same time
(iii) Multiple units: integrating information given by two or more fingers (possible when using fingerprints or irises) of a single user
(iv) Multiple snapshots: analyzing more than one sample of the same trait taken by the same sensor
(v) Multiple classifiers: extracting multiple features and using different classifiers, each for one feature

The advantages of multimodal biometrics were presented in [38]: it offers an improvement in matching accuracy and is less sensitive to impostor attacks and to noise in the sensed data. Palmprints are widely used in such multimodal scenarios. In [34], Mokni et al. combined shape and texture in order to recognize identity. However, shape in this approach refers to the shape of the principal lines, not to the shape of a person’s hand. Three principal lines are extracted as three curves based on a steerable filter and hysteresis thresholding. The texture is investigated based on fractal analysis. A fractal object is a mathematical object that results from an iterative process and is self-similar (its shape is repeated at various scales); fractals are irregular and geometrically complicated. Based on fractals, the measure named “fractal dimension” was proposed; it is calculated over multiple boxes. The highest obtained result was 98.32%. Fractal analysis was also used in [39], where a combination of descriptors was proposed: the first is the aforementioned fractal dimension, and the second is its generalization, the multifractal dimension descriptor. Mokni et al. used SVM and random forest algorithms for classification. The research was performed on two benchmark databases, PolyU and CASIA, and the highest obtained result was 97%. In the next paper [40], another fusion method is proposed: Mokni et al. put forward using both Gabor filters and the gray level co-occurrence matrix (GLCM). GLCM is a method that can be used to discover information about the statistical distribution of intensities as well as about the relative positions of neighbouring pixels in the analyzed image. After extraction, the features are classified by an SVM, giving a highest result of 98.25%. Yet another approach to combining classifiers was presented in [41], where the fractal dimension was added to the aforementioned fusion. The proposed system was tested using the PolyU and IITD databases, and the results were again close to 97% in each experiment. There are also other articles presenting the fusion of two biometric traits: in [42], hand geometry and vein pattern were used; in [43], hand shape and hand geometry were used; while in [44], the fusion of palmprint features and iris pattern was proposed.

There are also some examples of using palmprint biometrics in mobile scenarios [45–47]. It is clear that implementing a palmprint-based biometric system on a smartphone may be successful even though it carries some difficulties: complex backgrounds, changing illumination, hand pose variation, and, last but not least, limited processing power [48]. To make the system easy to implement successfully in a mobile scenario, we focus on both accuracy and processing time; computational complexity is also a crucial parameter in this setting. Therefore, we take those concerns into account while designing the novel verification schema for mobile scenarios described in detail in the next section.

3. Proposed Method

The general overview of the proposed method is presented in Figure 1. After sample acquisition, the preprocessing is performed. Then, both geometrical and texture features are extracted. The next step is the template matching and calculating the ratios between geometrical features. The last part constitutes the classification and gives the result: true for positive verification and false for the negative one.

Figure 1: The overview of the proposed method.
3.1. Preprocessing

The proposed algorithm uses the hand shape and the palmprint texture. The consecutive steps of the preprocessing phase are presented in Figure 2.

Figure 2: Preprocessing part of the algorithm: (a) raw sample, (b) threshold, (c) contours and the convex hull, (d) key points, (e) rotation, and (f) ROI extracted.

Biometric samples (images) used in the research were obtained from the IITD database, which is available online (∼csajaykr/database.php). An exemplary sample from the database is presented in Figure 2(a). First, normalization and thresholding were performed (Figure 2(b)). Due to the variety of samples, the threshold was based on the average intensity calculated from the whole sample. Then, the hand contour was detected, and a convex hull was found around the contour (Figure 2(c)). From the convex hull, convexity defects were extracted. A set of 9 key points (Figure 2(d)) was found from the contours:
(0) Top of the little finger
(1) Valley between the little and ring fingers
(2) Top of the ring finger
(3) Valley between the ring and middle fingers
(4) Top of the middle finger
(5) Valley between the middle and index fingers
(6) Top of the index finger
(7) Mass center of the contour
(8) Mass center of the convex hull
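The average-based thresholding mentioned above can be sketched in a few lines: the binarization threshold is simply the mean intensity of the whole sample. The pixel grid below is illustrative, not a real palmprint image.

```python
def binarize_by_mean(image):
    """Binarize a grayscale image (list of pixel rows) using its mean intensity."""
    pixels = [p for row in image for p in row]
    threshold = sum(pixels) / len(pixels)  # average over the whole sample
    return [[255 if p > threshold else 0 for p in row] for row in image]

# Illustrative 3x3 sample: dark background, bright "hand" pixels
sample = [
    [10, 20, 200],
    [15, 220, 210],
    [12, 18, 230],
]
mask = binarize_by_mean(sample)  # bright pixels become 255, the rest 0
```

Averaging over the whole sample adapts the threshold to each image's overall brightness, which is why it copes with the variety of samples mentioned above.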

The mass-center coordinates were calculated using equations (1) and (2), while M is expressed with equation (3), where x and y are the distances from the origin along the horizontal and vertical axes, i and j are the orders of the moment, and I is the intensity of a pixel:

x_c = M_10 / M_00, (1)
y_c = M_01 / M_00, (2)
M_ij = Σ_x Σ_y x^i y^j I(x, y). (3)
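The raw image moments and the resulting mass center can be computed directly from the definitions described above (the centroid is M_10/M_00 and M_01/M_00, the standard raw-moment formulation). The tiny intensity grid is illustrative.

```python
def moment(image, i, j):
    """Raw spatial moment M_ij = sum over pixels of x^i * y^j * I(x, y)."""
    return sum(
        (x ** i) * (y ** j) * intensity
        for y, row in enumerate(image)
        for x, intensity in enumerate(row)
    )

def mass_center(image):
    """Centroid (x_c, y_c) = (M_10 / M_00, M_01 / M_00)."""
    m00 = moment(image, 0, 0)
    return moment(image, 1, 0) / m00, moment(image, 0, 1) / m00

# All intensity concentrated in the middle pixel of a 3x3 grid
img = [
    [0, 0, 0],
    [0, 9, 0],
    [0, 0, 0],
]
cx, cy = mass_center(img)  # → (1.0, 1.0)
```

In the actual pipeline, the same computation is applied twice: once over the hand contour and once over the convex hull, giving key points 7 and 8.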

This set of points was also used to extract ROI from the image. The ROI extraction was similar to our previous work described in [16].

The middle point and distance d were calculated between points 2 and 6. Then, the angle of the line joining these two points was found, and the whole image was rotated by this angle (Figure 2(e)). A square region is then determined, and this quadratic area becomes the ROI (Figure 2(f)). The advantage of this algorithm is its invariance to hand rotation.
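The rotation-normalization step above can be sketched as follows: the midpoint, the distance d, and the angle of the line joining key points 2 and 6 are computed, and the image would then be rotated by that angle. The point coordinates below are illustrative.

```python
import math

def midpoint_distance_angle(p2, p6):
    """Return the midpoint, distance d, and angle (in degrees) between two key points."""
    mx, my = (p2[0] + p6[0]) / 2, (p2[1] + p6[1]) / 2
    d = math.hypot(p6[0] - p2[0], p6[1] - p2[1])
    angle = math.degrees(math.atan2(p6[1] - p2[1], p6[0] - p2[0]))
    return (mx, my), d, angle

# Illustrative fingertip coordinates for key points 2 and 6
mid, d, angle = midpoint_distance_angle((0, 0), (3, 4))
# mid = (1.5, 2.0), d = 5.0
```

Rotating by the measured angle aligns every hand to the same orientation before the square ROI is cut out, which is what makes the extraction rotation-invariant.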

3.2. Geometric Feature Extraction

Then, the feature extraction part is executed. Due to the possible future implementation in a mobile scenario, we decided to use a short feature vector; short vectors should not be excessively challenging for mobile devices. The elements of the feature vector are presented in equation (4), where d is the distance between key points. Using ratios of distances instead of raw distances ensures that the proposed method is invariant to scale variation. The distances are calculated using equation (5), where A and B are the points between which the distance is estimated and (x_A, y_A) and (x_B, y_B) are their coordinates:

d(A, B) = sqrt((x_A − x_B)^2 + (y_A − y_B)^2). (5)
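The scale invariance of distance ratios is easy to verify in code. The key-point layout and the particular point pairs below are illustrative, not the authors' exact feature vector from equation (4).

```python
import math

def dist(a, b):
    """Euclidean distance between two key points, as in equation (5)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ratio_features(points, pairs):
    """Distances for the given key-point pairs, each divided by the first distance."""
    d = [dist(points[i], points[j]) for i, j in pairs]
    return [x / d[0] for x in d]

# Illustrative key points and pairs (not the authors' exact vector)
pts = [(0, 0), (2, 0), (2, 2), (0, 2)]
pairs = [(0, 1), (1, 2), (0, 2)]
f1 = ratio_features(pts, pairs)
f2 = ratio_features([(2 * x, 2 * y) for x, y in pts], pairs)
# f1 == f2: scaling every coordinate by 2 leaves the ratio features unchanged
```

Because a uniform scale factor multiplies every distance equally, it cancels in each ratio, so the same hand photographed nearer or farther from the sensor yields the same feature vector.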

3.3. Matching

The next step is matching. First, texture-based template matching is used. There are multiple methods available. We decided to use three of them—CCOEFF, CCORR, and SQDIFF—in their normalized versions and compare the obtained results.

Before the similarity between two ROI images is measured, they need to be resized to an equal size. To calculate the similarity, equations (6), (9), and (10) are used, where x, y and i, j are pixel coordinates, w and h are the width and height of the ROI, and I and T are the base image ROI and the test image ROI, respectively. Normalization ensures that the optimal result is equal to 1.
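As a minimal pure-Python sketch, the CCORR score in its normalized form for two equal-size ROIs looks as follows; the other two scores (CCOEFF, SQDIFF) differ in mean-centering and in using squared differences. ROIs are flattened lists of intensities here, and the values are illustrative.

```python
import math

def ccorr_normed(roi_i, roi_t):
    """Normalized cross-correlation: returns 1.0 for identical (non-zero) ROIs."""
    num = sum(a * b for a, b in zip(roi_i, roi_t))
    den = math.sqrt(sum(a * a for a in roi_i) * sum(b * b for b in roi_t))
    return num / den

base = [10, 40, 90, 160]       # illustrative flattened base-image ROI
score = ccorr_normed(base, base)  # → 1.0 for a perfect match
```

Note that this score is also invariant to a uniform brightness scaling of either ROI, which is one reason normalized matching is preferred under varying illumination.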

Then, geometric features are compared using equation (11), where the i-th element of the test image feature vector is matched against the i-th element of the base image feature vector. Again, the optimal result is equal to 1:
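Equation (11) is not reproduced above, so the min/max ratio average below is only one plausible element-wise comparison with the same optimum of 1, used purely as a sketch rather than the authors' exact formula.

```python
def geometric_similarity(test_vec, base_vec):
    """Average of per-element min/max ratios: equals 1.0 for identical vectors."""
    return sum(min(t, b) / max(t, b) for t, b in zip(test_vec, base_vec)) / len(test_vec)

# Identical feature vectors give the optimal score of 1.0
same = geometric_similarity([1.0, 1.2, 0.8], [1.0, 1.2, 0.8])  # → 1.0
```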

4. Classification, Experimental Setup, Results, and Discussion

In order to achieve the highest possible accuracy, multiple experiments were executed.

The presented results were obtained on a PC (64-bit Windows 8.1, quad-core 1.7 GHz CPU, 4.00 GB RAM). During the experiments, the IITD database was used (600 elements in the database and 150 testing elements). The first approach was to add each element of the feature vector to the result of template matching and to compare the sum with a threshold α, using the following equation:
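The thresholded fusion rule described above (the sum of the geometric feature similarities and the template-matching score compared with α) can be sketched as follows; the feature values and the α value are illustrative, not taken from the paper.

```python
def verify(feature_sims, template_score, alpha):
    """Accept the claimed identity when the fused score exceeds the threshold alpha."""
    return sum(feature_sims) + template_score > alpha

# Illustrative scores: a genuine attempt (values near 1) and an impostor attempt
accepted = verify([0.95, 0.97, 0.93, 0.96, 0.94], 0.91, alpha=5.5)  # True
rejected = verify([0.60, 0.55, 0.62, 0.58, 0.61], 0.40, alpha=5.5)  # False
```

The threshold α trades off FAR against FRR: raising it rejects more impostors but also more genuine users, which is exactly what the ROC curves in Figure 3 characterize.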

Table 1 presents the accuracy reached and the difference between the system using only geometric features (row 1) and the systems using a fusion of features (rows 2–4). Each texture-based method improved the accuracy without significantly increasing the computation time (time rose by 1.1–1.6%). Figure 3 presents the ROC curves of the proposed methods.

Table 1: Obtained results: accuracy and accuracy improvement evaluated with experimentally set threshold.
Figure 3: ROC curves for the proposed methods: geometric, geometric + CCOEFF, geometric + CCORR, and geometric + SQDIFF.

Due to the observed increase in accuracy, in the next experiment the classification was based on equation (13). It uses the 5 geometric features but also all three texture-based methods, CCOEFF, CCORR, and SQDIFF, at the same time. The experiment produced an accuracy of 83%. Figure 4 presents the EER of this approach.

Figure 4: Equal error rate for the fusion of 5 geometric and 3 texture features: CCORR, SQDIFF, and CCOEFF.

Since no improvement was observed in the second experiment, we tested yet another experimental setup.

Since the most promising method was CCOEFF, which provided a 9% accuracy increase in the first approach, it was selected for the next experiment. Note that the proposed classification method relies more on geometric features (we rely on 5 geometric features and only 1 texture-based one). Thus, we improved the matching step using equation (14). Figure 5 presents charts of the EER depending on the x parameter, while Table 2 contains the obtained results depending on the value of x. The most promising result, 91%, was obtained for the best-performing value of x reported in Table 2.
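Equation (14) is not reproduced above, so the weighted sum below is only an assumed sketch of such a fusion, in which the single CCOEFF texture score is weighted by the tunable parameter x before being combined with the five geometric features; it is not the authors' exact rule, and the scores are illustrative.

```python
def fused_score(geometric_sims, ccoeff_score, x):
    """Weighted fusion: geometric similarities plus x times the CCOEFF texture score."""
    return sum(geometric_sims) + x * ccoeff_score

# Illustrative genuine-attempt scores with a weight of x = 5
score = fused_score([0.9, 0.9, 0.9, 0.9, 0.9], 0.8, x=5)  # 4.5 + 4.0 = 8.5
```

Sweeping x, as in Figure 5, rebalances how much the decision depends on texture versus geometry; the reported optimum (91% accuracy) corresponds to one particular weighting.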

Figure 5: FAR/FRR depending on the x parameter, for four values of x (panels (a)–(d)).
Table 2: Obtained results: accuracy depending on the x parameter.

The highest result obtained by the proposed method reaches 91%. This value is comparable to other studies available in the literature. Table 3 presents some palmprint- and hand-geometry-based research.

Table 3: Comparison of the proposed method to some state-of-the-art methods using the fusion of hand geometry and palmprint texture features.

5. Conclusions

In this paper, we have presented a lightweight palmprint-based verification system dedicated to mobile scenarios and reported promising results. The crucial point of the system lies in the fusion of two kinds of features: hand geometry and palmprint texture. Using multimodal biometrics ensures higher robustness to various intrusive factors such as illumination changes or noise in the sensed data. The increase in accuracy was obtained without a significant increase in computation time; therefore, the proposed method is not computationally demanding. It is now being implemented on a mobile device in our ongoing work.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This work was funded under the BS/30/2018 project, which received funding from the Polish Ministry of Science and Higher Education.


References

1. A. K. Jain, “Biometric recognition: how do I know who you are?” in Proceedings of IEEE 12th Signal Processing and Communications Applications Conference, pp. 3–5, Kuşadası, Turkey, April 2004.
2. M. Choraś and R. Kozik, “Contactless palmprint and knuckle biometrics for mobile devices,” Pattern Analysis and Applications, vol. 15, no. 1, pp. 73–85, 2012.
3. J. A. Unar, W. C. Seng, and A. Abbasi, “A review of biometric technology along with trends and prospects,” Pattern Recognition, vol. 47, no. 8, pp. 2673–2688, 2014.
4. M. Choras, “Image feature extraction methods for ear biometrics—a survey,” in Proceedings of 6th International Conference on Computer Information Systems and Industrial Management Applications (CISIM’07), pp. 261–265, IEEE, Elk, Poland, June 2007.
5. P. Porwik, R. Doroz, and K. Wrobel, “An ensemble learning approach to lip-based biometric verification, with a dynamic selection of classifiers,” Expert Systems with Applications, vol. 115, pp. 673–683, 2019.
6. M. Choraś and P. Mroczkowski, “Keystroke dynamics for biometrics identification,” in Proceedings of International Conference on Adaptive and Natural Computing Algorithms (ICANNGA 2007), pp. 424–431, Springer, Warsaw, Poland, April 2007.
7. P. Panasiuk, M. Szymkowski, M. Dabrowski, and K. Saeed, “A multimodal biometric user identification system based on keystroke dynamics and mouse movements,” in Proceedings of 15th IFIP International Conference on Computer Information Systems and Industrial Management, pp. 672–681, Springer, Vilnius, Lithuania, September 2016.
8. S. Afsal, A. K. Rafeeq, J. Jothykumar, S. Ahmed, and F. Sayeed, “A novel approach for palm print recognition using entropy information features,” in Proceedings of International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), pp. 1439–1442, Chennai, India, March 2016.
9. P. Dubey and T. Kanumuri, “Optimal local direction binary pattern based palmprint recognition,” in Proceedings of IEEE 2nd International Conference on Computing for Sustainable Global Development (INDIACom), vol. 3, pp. 1979–1984, New Delhi, India, March 2015.
10. S. Jadhav, M. Raut, V. Humbe, and T. Kartheeswaran, “A low-cost contactless palm print device to recognize person based on texture measurement,” International Journal of Engineering, Technology, Science and Research, vol. 3, pp. 1–7, 2016.
11. W. Jia, B. Zhang, J. Lu et al., “Palmprint recognition based on complete direction representation,” IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4483–4498, 2017.
12. D. Zhang, W. Kong, J. You, and M. Wong, “Online palmprint identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1041–1050, 2003.
13. G. Jaswal, R. Nath, and A. Kaul, “Texture based palm print recognition using 2-d Gabor filter and sub space approaches,” in Proceedings of IEEE International Conference on Signal Processing, Computing and Control (ISPCC), vol. 25, pp. 344–349, Waknaghat, India, September 2015.
14. H. Sherawat and S. Dalal, “Palmprint recognition system using 2-D Gabor and SVM as classifier,” International Journal of Innovative Technology and Research, vol. 4, pp. 3007–3010, 2016.
15. L. Leng, G. Liu, M. Li, M. K. Khan, and A. M. Al-Khouri, “Logical conjunction of triple-perpendicular-directional translation residual for contactless palmprint preprocessing,” in Proceedings of 2014 11th International Conference on Information Technology: New Generations (ITNG), pp. 523–528, IEEE, Las Vegas, Nevada, USA, April 2014.
16. A. Wojciechowska, M. Choraś, and R. Kozik, “Evaluation of the pre-processing methods in image-based palmprint biometrics,” in Proceedings of 9th International Conference on Image Processing and Communications Challenges, M. Choraś and R. S. Choraś, Eds., vol. 9, pp. 43–48, Bydgoszcz, Poland, September 2017.
17. D. Zhang, Palmprint Authentication, Kluwer Academic Publishers, Dordrecht, Netherlands, 2004.
18. K. Ray and R. Misra, “Palm print recognition using Hough transforms,” in Proceedings of International Conference on Computational Intelligence and Communication Networks (CICN), pp. 422–425, Jabalpur, India, December 2015.
19. R. Ramteke and A. Alsubari, “Extraction of palmprint texture features using combined DWT-DCT and local binary pattern,” in Proceedings of IEEE 2nd International Conference on Next Generation Computing Technologies (NGCT), pp. 748–753, Dehradun, India, October 2016.
20. J. Patil, C. Nayak, and M. Jain, “Palmprint recognition using DWT, DCT and PCA techniques,” in Proceedings of IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), pp. 1–5, Tamil Nadu, India, 2015.
21. Y.-T. Luo, L.-Y. Zhao, B. Zhang et al., “Local line directional pattern for palmprint recognition,” Pattern Recognition, vol. 50, pp. 26–44, 2016.
22. S. Verma and S. Chandran, “Analysis of sift and surf feature extraction in palmprint verification system,” in Proceedings of International Conference on Computer, Communication and Control Technology (IC4T), pp. 27–30, Lucknow, India, November 2016.
23. X. Bai, N. Gao, Z. Zhang, and D. Zhang, “3D palmprint identification combining blocked ST and PCA,” Pattern Recognition Letters, vol. 100, pp. 89–95, 2017.
24. H. Li, J. Zhang, and L. Wang, “Robust palmprint identification based on directional representations and compressed sensing,” Multimedia Tools and Applications, vol. 70, no. 3, pp. 2331–2345, 2012.
25. J. Chen, Y.-S. Moon, M.-F. Wong, and G. Su, “Palmprint authentication using a symbolic representation of images,” Image and Vision Computing, vol. 28, no. 3, pp. 343–351, 2010.
26. M. Franzgrote, C. Borg, B. Ries et al., “Palmprint verification on mobile phones using accelerated competitive code,” in Proceedings of IEEE International Conference on Hand-Based Biometrics (ICHB), pp. 1–6, Hong Kong, China, November 2011.
27. Y. Xu, L. Fei, J. Wen, and D. Zhang, “Discriminative and robust competitive code for palmprint recognition,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 2, pp. 232–241, 2018.
28. H. K. Kalluri and M. V. N. K. Prasad, “Palmprint identification using Gabor and wide principal line features,” Procedia Computer Science, vol. 93, pp. 706–712, 2016.
29. D.-S. Huang, W. Jia, and D. Zhang, “Palmprint verification based on principal lines,” Pattern Recognition, vol. 41, no. 4, pp. 1316–1328, 2008.
30. D. Aishwarya, M. Gowri, and R. Saranya, “Palm print recognition using liveness detection technique,” in Proceedings of IEEE Second International Conference on Science Technology Engineering and Management (ICONSTEM), pp. 109–114, Chennai, India, March 2016.
31. L. Zhang, H. Li, and J. Niu, “Fragile bits in palmprint recognition,” IEEE Signal Processing Letters, vol. 19, no. 10, pp. 663–666, 2012.
32. H. Imtiaz and S. Anowarul Fattah, “A histogram-based dominant wavelet domain feature selection algorithm for palm-print recognition,” Computers & Electrical Engineering, vol. 39, no. 4, pp. 1114–1128, 2013.
33. L. Dian and S. Dongmei, “Contactless palmprint recognition based on convolutional neural network,” in Proceedings of IEEE 13th International Conference on Signal Processing (ICSP), pp. 1363–1367, Chengdu, China, November 2016.
34. R. Mokni, H. Drira, and M. Kherallah, “Combining shape analysis and texture pattern for palmprint identification,” Multimedia Tools and Applications, vol. 76, no. 22, pp. 23981–24008, 2016.
35. X. Xu, L. Lu, X. Zhang, H. Lu, and W. Deng, “Multispectral palmprint recognition using multiclass projection extreme learning machine and digital shearlet transform,” Neural Computing and Applications, vol. 27, no. 1, pp. 143–153, 2014.
36. E. Yoruk, E. Konukoglu, B. Sankur, and J. Darbon, “Shape-based hand recognition,” IEEE Transactions on Image Processing, vol. 15, no. 7, pp. 1803–1815, 2006.
37. A. Ross and A. Jain, “Multimodal biometrics: an overview,” in Proceedings of IEEE 12th European Signal Processing Conference, pp. 1221–1224, Vienna, Austria, September 2004.
38. C. Taouche, M. Batouche, M. Berkane, and A. Taleb-Ahmed, “Multimodal biometric systems,” in Proceedings of IEEE International Conference on Multimedia Computing and Systems (ICMCS), pp. 301–308, Marrakech, Morocco, April 2014.
39. R. Mokni, H. Drira, and M. Kherallah, “Fusing multi-techniques based on LDA-CCA and their application in palmprint identification system,” in Proceedings of 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), pp. 350–357, IEEE, Hammamet, Tunisia, October 2017.
40. R. Mokni, M. Elleuch, and M. Kherallah, “Biometric palmprint identification via efficient texture features fusion,” in Proceedings of IEEE International Joint Conference on Neural Networks (IJCNN), pp. 4857–4864, Vancouver, BC, Canada, July 2016.
41. R. Mokni, A. Mezghani, H. Drira, and M. Kherallah, “Multiset canonical correlation analysis: texture feature level fusion of multiple descriptors for intra-modal palmprint biometric recognition,” in Proceedings of Pacific-Rim Symposium on Image and Video Technology, pp. 3–16, Springer, Wuhan, China, November 2017.
42. P. Gupta, S. Srivastava, and P. Gupta, “An accurate infrared hand geometry and vein pattern based authentication system,” Knowledge-Based Systems, vol. 103, pp. 143–155, 2016.
43. S. Sharma, S. R. Dubey, S. K. Singh, R. Saxena, and R. K. Singh, “Identity verification using shape and geometry of human hands,” Expert Systems with Applications, vol. 42, no. 2, pp. 821–832, 2015.
44. S. Hariprasath and T. Prabakar, “Multimodal biometric recognition using iris feature extraction and palmprint features,” in Proceedings of 2012 International Conference on Advances in Engineering, Science and Management (ICAESM), pp. 174–179, IEEE, Tamil Nadu, India, March 2012.
45. L. Fang and Neera, “Mobile based palmprint recognition system,” in Proceedings of IEEE International Conference on Control, Automation and Robotics (ICCAR), pp. 233–237, Singapore, May 2015.
46. L. Leng, F. Gao, Q. Chen, and C. Kim, “Palmprint recognition system on mobile devices with double-line-single-point assistance,” Personal and Ubiquitous Computing, vol. 22, no. 1, pp. 93–104, 2017.
47. A.-S. Ungureanu, S. Thavalengal, T. E. Cognard, C. Costache, and P. Corcoran, “Unconstrained palmprint as a smartphone biometric,” IEEE Transactions on Consumer Electronics, vol. 63, no. 3, pp. 334–342, 2017.
48. H. Javidnia, A. Ungureanu, C. Costache, and P. Corcoran, “Palmprint as a smartphone biometric,” in Proceedings of IEEE International Conference on Consumer Electronics (ICCE), pp. 463–466, Las Vegas, NV, USA, January 2016.
49. S. Barra, M. De Marsico, M. Nappi, F. Narducci, and D. Riccio, “A hand-based biometric system in visible light for mobile environments,” Information Sciences, vol. 479, pp. 472–485, 2019.
50. R. Kozik and M. Choraś, “Combined shape and texture information for palmprint biometrics,” Journal of Information Assurance and Security, vol. 5, pp. 58–63, 2010.
51. A. Kumar and D. Zhang, “Personal recognition using hand shape and texture,” IEEE Transactions on Image Processing, vol. 15, no. 8, pp. 2454–2461, 2006.