Abstract

Palmprint biometrics is a promising modality that enables efficient human identification, also in mobile scenarios. In this paper, a novel approach to feature extraction for palmprint verification is presented. The features are extracted from hand geometry and palmprint texture and then fused. Fusing the features yields higher accuracy and, at the same time, provides more robustness to intrusive factors such as illumination variation or noise. The major contribution of this paper is the proposal and evaluation of a lightweight verification schema for biometric systems that improves accuracy without increasing computational complexity, which is a necessary requirement in real-life scenarios.

1. Introduction

Biometric identification systems are becoming increasingly popular and have been widely researched recently. They are applied as security systems, for example, to detect suspects in a crowd or to establish the identity of a person entering a plane or a restricted area. The key advantages of biometrics [1] are as follows: it is not possible to forget a token (as the tokens are actually parts of the body or behaviour!), it is not required to carry any additional items (such as keys and badges), and the same biometric feature may be used in numerous cases (e.g., in a biometric passport, in a local sports center, and to unlock a smartphone). Thus, biometrics is user-friendly, and therefore new methods and emerging modalities are still being proposed [2]. Currently, the key challenges of such systems are liveness detection, vulnerability to attacks, computing time (especially for systems with huge databases), user acceptance, privacy, and distortions (pose rotation or varying illumination conditions). Biometrics may be based on numerous traits such as fingerprint, palmprint, iris, voice, gait, and many others [3]. They can be either anatomical (physical), such as ear biometrics [4] and lips recognition [5], or behavioral, such as keystroke dynamics [6] or mouse clicks [7]. Although fingerprint is the most popular biometric trait and iris seems to be the most reliable one [8], in our research we focused on palmprint images. Devices acquiring iris samples are very expensive, while fingerprint recognition is difficult when the finger is dirty. Palmprints have several advantages over other biometric traits [2, 9–11]: they are unique, formed during pregnancy, distinctive, and easy to self-position, have a rich structure, and may be captured in low-resolution images; what is more, devices for sample acquisition are relatively cheap.

Therefore, our goal in this work is to propose a new lightweight verification schema based on palmprint images that may in the future be moved to mobile systems and scenarios. Moreover, palmprints are not associated with police operations or criminal investigations and thus are more appealing to end-users and societies.

This article is organized as follows: in Section 2, biometric systems and multimodal biometrics are described, and in Section 3, the proposed method is presented. In Sections 4 and 5, results and conclusions are provided, respectively.

2. Related Work

Palmprint-based recognition was introduced more than 20 years ago. One of the first implemented systems was proposed in [12], where Gabor filters were utilised for feature extraction. The 2D Gabor filter was reused multiple times, for example, in [13, 14].

The most commonly used biometric verification system is composed of several steps enumerated by Zhang et al. in [12]: image acquisition, preprocessing, feature extraction, and feature matching. In the literature, there are numerous methods used in order to perform each of those steps.

Preprocessing is performed to achieve two goals [15]. The first one is image enhancement (reducing unwanted details and noise). The second is ROI extraction. Selecting the proper preprocessing method is meaningful and can strongly affect the accuracy of the whole verification system, a fact that was investigated in our previous work [16].

A summary of the first approaches to palmprint recognition was presented in the book [17]. There, the set of possible features extracted from a palmprint was listed: principal lines, minutiae points, texture, and geometry. Various approaches to feature extraction have been presented in the literature. Several of them are based on transforms: among others, the Hough transform used in [18], the Haar discrete wavelet transform implemented in [19], and the discrete cosine transform used in [20]. There are also local descriptors applied to feature extraction: local binary patterns [21], SURF and SIFT descriptors [22], and the histogram of oriented gradients used in our previous work [16]. Another popular method is based on statistical principal component analysis (PCA), implemented in [23, 24] and presented, for example, in [25–27]. Yet another approach focuses on principal lines, as in [28]. In [29], Huang et al. emphasize the usefulness of principal lines in palmprint-based systems: (1) this approach is similar to human behaviour (in order to compare two palmprints, people instinctively compare the principal lines: the head, heart, and life lines); (2) principal lines are more stable and more visible than wrinkles, and they are less affected by noise or illumination conditions; and (3) they can be used in retrieval systems, for example, in forensics.

Features extracted from palmprint images need to be matched. There are plenty of matching methods available. Commonly, they may be divided into two groups: simple distances and artificial intelligence methods. From the first group, it is possible to enumerate the Euclidean distance [30], the Hamming distance [31], and the average sum-squares distance [32]. Popular artificial intelligence methods are neural networks [33], the support vector machine (SVM) [34], or dedicated classifiers such as the multiclass projection extreme learning machine (MPELM) proposed in [35]. Meanwhile, hand geometry was also proposed as a biometric feature. In [36], Yoruk et al. used independent component analysis (ICA) for feature extraction and a modified Hausdorff distance for matching.

Due to the insufficient accuracy of unimodal biometrics, the multimodal approach was proposed. Several types of multimodal systems are given in [37] and described as follows:
(i) Multiple sensors: samples acquired by at least two sensors
(ii) Multiple biometrics: analyzing, for instance, palmprint and fingerprint at the same time
(iii) Multiple units: integrating information given by two or more fingers (possible when using fingerprints or irises) of a single user
(iv) Multiple snapshots: analyzing more than one sample of the same trait taken by the same sensor
(v) Multiple classifiers: extracting multiple features and using different classifiers, each for one feature

The advantages of multimodal biometrics were presented in [38]: it improves matching accuracy and is less sensitive to impostor attacks and to noise in the sensed data. Palmprints are widely implemented in such multimodal scenarios. In [34], Mokni et al. combined shape and texture in order to recognize identity. However, shape in this approach refers to the shape of the principal lines, not the shape of a person's hand. Three principal lines are extracted as three curves based on a steerable filter and hysteresis thresholding. The texture is investigated using fractal analysis. A fractal object is a mathematical object that results from an iterative process and is self-similar (its shape is repeated at various scales). Fractals are irregular and geometrically complicated. Based on fractal theory, a measure named the "fractal dimension" was proposed; it is calculated for multiple boxes. The highest obtained result was 98.32%. Fractal analysis was also used in [39], where a combination of two descriptors was proposed: the first is the aforementioned fractal dimension and the second is its generalization, the multifractal dimension descriptor. Mokni et al. used SVM and random forest algorithms for classification. The research was performed on two benchmark databases: PolyU and CASIA. The highest obtained result was 97%. In the next paper [40], another fusion method was proposed: Mokni et al. put forward using both Gabor filters and the gray-level co-occurrence matrix (GLCM). GLCM is a method that can be used to discover information about the statistical distribution of intensities as well as about the relative positions of neighbouring pixels in the analyzed image. After feature extraction, the features are classified by SVM, giving the highest result equal to 98.25%.
Yet another approach to combining classifiers was presented in [41], where the fractal dimension was added to the aforementioned fusion. The proposed system was tested using the PolyU and IITD databases. The results were again close to 97% in each experiment. There are also other articles presenting the fusion of two biometric traits: in [42], hand geometry and vein pattern were used; in [43], hand shape and hand geometry were used; while in [44], the fusion of palmprint features and iris pattern was proposed.

There are also some examples of using palmprint biometrics in a mobile scenario [45–47]. Implementing a palmprint-based biometric system on a smartphone may be successful even though it carries some difficulties: complex background, changing illumination, hand pose variation, and, last but not least, limited processing power [48]. In order to implement the system in a mobile scenario easily and successfully, we focus on accuracy as well as on processing time. Computational complexity is also a crucial parameter in this scenario. Therefore, we take those concerns into account while designing a novel verification schema for mobile scenarios, described in detail in the next section.

3. Proposed Method

The general overview of the proposed method is presented in Figure 1. After sample acquisition, preprocessing is performed. Then, both geometric and texture features are extracted. The next step is template matching and calculating the ratios between geometric features. The last part is classification, which gives the result: true for positive verification and false for negative.

3.1. Preprocessing

The proposed algorithm uses the hand shape and the palmprint texture. The consecutive steps of the preprocessing phase are presented in Figure 2.

Biometric samples (images) used in the research were obtained from the IITD database, which is available online (http://www4.comp.polyu.edu.hk/∼csajaykr/database.php). An exemplary sample from the database is presented in Figure 2(a). First, normalization and thresholding were performed (Figure 2(b)). Due to the variety of samples, the threshold was based on the average calculated from the whole sample. Then, the hand contour was detected and a convex hull was found around the contour (Figure 2(c)). From the convex hull, convexity defects were extracted. A set of 9 key points (Figure 2(d)) was found from the contours:
(0) Top of the little finger
(1) Valley between the little and ring fingers
(2) Top of the ring finger
(3) Valley between the ring and middle fingers
(4) Top of the middle finger
(5) Valley between the middle and index fingers
(6) Top of the index finger
(7) Mass center of the contour
(8) Mass center of the convex hull
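The average-based thresholding step can be sketched in a few lines. This is a minimal pure-Python illustration, assuming the image is a list of rows of 8-bit grayscale values; a real implementation would more likely use OpenCV on the full-resolution sample.

```python
def average_threshold(image):
    """Binarize a grayscale image using its mean intensity as the
    threshold, mirroring the average-based thresholding described
    in the text. `image` is a list of rows of 8-bit pixel values."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return [[255 if p > mean else 0 for p in row] for row in image]
```

Computing the threshold from the whole sample, rather than using a fixed value, is what makes the step robust to the varying brightness of different acquisitions.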

Mass center coordinates were calculated using equations (1) and (2), while M is expressed with equation (3), where x and y are the distances from the origin along the horizontal and vertical axes, i and j are the orders of the moment, and I is the intensity of a pixel:

x̄ = M_{10} / M_{00}, (1)
ȳ = M_{01} / M_{00}, (2)
M_{ij} = Σ_x Σ_y x^i y^j I(x, y). (3)
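The raw image moments and the resulting mass center can be sketched directly from their definitions. This pure-Python version is only illustrative; in an OpenCV pipeline the same values come from `cv2.moments`.

```python
def moment(image, i, j):
    """Raw image moment M_ij = sum over all pixels of x^i * y^j * I(x, y),
    where `image` is a list of rows and I(x, y) is the intensity at
    column x, row y."""
    return sum(
        (x ** i) * (y ** j) * intensity
        for y, row in enumerate(image)
        for x, intensity in enumerate(row)
    )

def mass_center(image):
    """Centroid (x̄, ȳ) = (M10 / M00, M01 / M00)."""
    m00 = moment(image, 0, 0)
    return moment(image, 1, 0) / m00, moment(image, 0, 1) / m00
```

For a binary hand mask, `mass_center` applied to the filled contour and to the convex hull yields key points 7 and 8, respectively.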

This set of points was also used to extract ROI from the image. The ROI extraction was similar to our previous work described in [16].

The middle point and the distance d were calculated between points 2 and 6. Then, the angle between these two points was found, and the whole image was rotated by this angle (Figure 2(e)). A square (with dimensions derived from d) is determined, and this quadratic area becomes the ROI (Figure 2(f)). The advantage of this algorithm is its invariance to hand rotation.
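The orientation-normalization step above reduces to a bit of planar geometry. The sketch below computes the midpoint, the distance d, and the rotation angle from key points 2 and 6; the actual image rotation (e.g., via an affine warp in OpenCV) is omitted.

```python
import math

def midpoint_and_distance(p2, p6):
    """Middle point and Euclidean distance d between key points 2 and 6."""
    (x2, y2), (x6, y6) = p2, p6
    mid = ((x2 + x6) / 2, (y2 + y6) / 2)
    d = math.hypot(x6 - x2, y6 - y2)
    return mid, d

def rotation_angle(p2, p6):
    """Angle (in degrees) of the line joining key points 2 and 6 with
    respect to the horizontal axis; rotating the image by this angle
    normalizes the hand's orientation."""
    (x2, y2), (x6, y6) = p2, p6
    return math.degrees(math.atan2(y6 - y2, x6 - x2))
```

Because the ROI square is anchored to these two key points, the same palm region is extracted regardless of how the hand was rotated during acquisition.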

3.2. Geometric Feature Extraction

Then, the feature extraction part is executed. Due to the possible future implementation in a mobile scenario, we decided to use a short feature vector; short vectors should not be excessively challenging for mobile devices. The elements of the feature vector are ratios of distances between the key points, as presented in equation (4). Using ratios of distances instead of raw distances ensures that the proposed method is invariant to scale variation. The distances are calculated using equation (5), where A = (x_A, y_A) and B = (x_B, y_B) are the points between which the distance is estimated:

d(A, B) = √((x_A − x_B)² + (y_A − y_B)²). (5)
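The geometric feature extraction can be sketched as follows. Note that the specific key-point pairs making up equation (4) are not spelled out in this excerpt, so the `pairs` argument below is a hypothetical parameterization; only the distance formula and the ratio construction come from the text.

```python
import math

def distance(a, b):
    """Euclidean distance between points A = (x_A, y_A) and B = (x_B, y_B)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def ratio_features(key_points, pairs):
    """Build a scale-invariant feature vector from ratios of distances
    between key points. `pairs` is a list of ((i, j), (k, l)) index
    tuples: each feature equals
    distance(key_points[i], key_points[j]) / distance(key_points[k], key_points[l])."""
    return [
        distance(key_points[i], key_points[j]) / distance(key_points[k], key_points[l])
        for (i, j), (k, l) in pairs
    ]
```

Scaling every key point by the same factor leaves each ratio unchanged, which is exactly the scale invariance claimed for the method.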

3.3. Matching

The next step is matching. First, texture-based template matching is used. There are multiple methods available. We decided to use three of them—CCOEFF, CCORR, and SQDIFF—in their normalized versions and compare the obtained results.

Before similarity is measured between two ROI images, they need to be resized to an equal size. To calculate the similarity, equations (6), (9), and (10) are used, where i and j are pixel coordinates running over the width w and height h of the ROI, and I and T are the base image ROI and the test image ROI. Normalization ensures that the scores are bounded, with the optimal result equal to 1 (for SQDIFF, the optimal value of the raw score is 0):

R_CCOEFF = Σ_{i,j} I′(i, j) T′(i, j) / √(Σ_{i,j} I′(i, j)² · Σ_{i,j} T′(i, j)²), (6)

where

I′(i, j) = I(i, j) − (1 / (w · h)) Σ_{i,j} I(i, j), (7)
T′(i, j) = T(i, j) − (1 / (w · h)) Σ_{i,j} T(i, j), (8)

R_CCORR = Σ_{i,j} I(i, j) T(i, j) / √(Σ_{i,j} I(i, j)² · Σ_{i,j} T(i, j)²), (9)
R_SQDIFF = Σ_{i,j} (I(i, j) − T(i, j))² / √(Σ_{i,j} I(i, j)² · Σ_{i,j} T(i, j)²). (10)
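For equal-size ROIs, the three normalized measures reduce to single scalar scores. The sketch below is a pure-Python rendering of the standard normalized formulas (the same family as OpenCV's `TM_CCOEFF_NORMED`, `TM_CCORR_NORMED`, and `TM_SQDIFF_NORMED` evaluated at zero shift), assuming both images are lists of rows of equal dimensions.

```python
def _flat(img):
    return [p for row in img for p in row]

def ccorr_normed(i_img, t_img):
    """Normalized cross-correlation: sum(I*T) / sqrt(sum(I^2) * sum(T^2))."""
    I, T = _flat(i_img), _flat(t_img)
    num = sum(a * b for a, b in zip(I, T))
    return num / (sum(a * a for a in I) * sum(b * b for b in T)) ** 0.5

def ccoeff_normed(i_img, t_img):
    """Normalized correlation coefficient: like CCORR, but with the
    mean intensity subtracted from each image first."""
    I, T = _flat(i_img), _flat(t_img)
    mi, mt = sum(I) / len(I), sum(T) / len(T)
    I = [a - mi for a in I]
    T = [b - mt for b in T]
    num = sum(a * b for a, b in zip(I, T))
    return num / (sum(a * a for a in I) * sum(b * b for b in T)) ** 0.5

def sqdiff_normed(i_img, t_img):
    """Normalized squared difference: 0 for identical images."""
    I, T = _flat(i_img), _flat(t_img)
    num = sum((a - b) ** 2 for a, b in zip(I, T))
    return num / (sum(a * a for a in I) * sum(b * b for b in T)) ** 0.5
```

In practice, these scores would be computed by `cv2.matchTemplate` on the resized ROIs; the explicit version above only makes the normalization visible.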

Then, the geometric features are compared using equation (11), where f_i^T is the i-th element of the test image feature vector and f_i^B is the i-th element of the base image feature vector; again, the optimal result is equal to 1.

4. Classification, Experimental Setup, Results, and Discussion

In order to achieve the highest possible accuracy, multiple experiments were executed.

The presented results were obtained on a PC (64-bit Windows 8.1, 4-core 1.7 GHz CPU, 4.00 GB RAM). During the experiments, the IITD database was used (600 elements in the database and 150 testing elements). The first approach was to add each element of the feature vector to the result of template matching and to compare the sum with a threshold α (equation (12)).
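The first fusion rule can be sketched directly from its description: sum the template-matching score and the elements of the geometric feature vector, and accept when the total reaches the threshold α. The threshold value here is purely illustrative.

```python
def verify(texture_score, geometric_features, alpha):
    """First fusion rule: add every element of the geometric feature
    vector to the template-matching score and accept (return True)
    when the sum reaches the threshold alpha."""
    return texture_score + sum(geometric_features) >= alpha
```

The value of α controls the trade-off between false acceptances and false rejections, which is what the ROC curves in Figure 3 characterize.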

Table 1 presents the accuracy reached and the difference between the system using only geometric features (row 1) and the system using a fusion of features (rows 2–4). Each texture-based method was able to improve the accuracy without significantly increasing the computing time (time rose by 1.1–1.6%). Figure 3 presents ROC curves of the proposed methods.

Due to the observed increase in accuracy, in the next experiment the classification was based on equation (13), which uses the 5 geometric features together with all three texture-based methods (CCOEFF, CCORR, and SQDIFF) at the same time. The experiment produced an accuracy equal to 83%. Figure 4 presents the EER of this approach.

Since no improvement was observed in the second experiment, we tested yet another experimental setup.

Since the most promising method was CCOEFF, which provided a 9% accuracy increase in the first approach, this method was selected for the next experiment. Note that the proposed classification method relies more on geometric features (we rely on 5 geometric features and only 1 texture-based one). Thus, we improve the matching step using equation (14). Figure 5 presents charts of the EER depending on the parameter x over the investigated range, while Table 2 contains the obtained results depending on the value of x. The most promising result was 91%, obtained for the best value of x reported in Table 2.

The highest result obtained with the proposed method reaches 91%. This value is comparable to other studies available in the literature. Table 3 presents some palmprint- and hand-geometry-based research.

5. Conclusions

In this paper, we have presented a lightweight palmprint-based verification system dedicated to mobile scenarios and reported promising results. The crucial point of the system lies in the fusion of two kinds of features: hand geometry and palmprint texture. Using multimodal biometrics ensures higher robustness to various interfering factors such as illumination changes or noise in the sensed data. The increase in accuracy was obtained without a significant increase in computation time. Therefore, we showed that the proposed method is not computationally demanding. It is now being implemented on a mobile device in our ongoing work.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was funded under the BS/30/2018 project, which received funding from the Polish Ministry of Science and Higher Education.