Abstract

This paper proposes a novel 3D palmprint recognition algorithm that combines 3D palmprint features using Dempster-Shafer (D-S) fusion theory. First, structured light imaging is used to acquire the 3D palmprint data. Second, two types of distinctive features, the mean curvature feature and the Gaussian curvature feature, are extracted. Third, belief functions are assigned to the mean curvature recognition (MCR) and Gaussian curvature recognition (GCR) results, respectively. Fourth, the fused belief function of the proposed method is determined by D-S fusion theory. Finally, palmprint recognition is accomplished according to the classification criteria. A 3D palmprint database with 1000 range images from 100 individuals was established, on which extensive experiments were performed. The results show that the proposed 3D palmprint recognition method is much more robust to illumination variations and palm condition changes than MCR and GCR alone. Moreover, by fusing the mean curvature and Gaussian curvature features, the experimental results are promising (an average equal error rate of 0.404%). In the future, the imaging technique needs further improvement for better recognition performance.

1. Introduction

Palmprint images can be acquired in a similar manner to face images, and a number of researchers have suggested that the human palmprint is unique enough to each individual to allow practical use as a biometric. Many researchers have performed palmprint recognition based on the palmprint's appearance in 2D intensity images [1–10], whereas a smaller number have accomplished palmprint recognition using 3D palmprint information [11–13]. Although 2D palmprint recognition techniques can achieve high accuracy, they are sensitive to strong illumination variations and palm condition changes, and much of the palm's 3D structural information is lost in the 2D acquisition process. Therefore, it is of high interest to explore new palmprint recognition techniques that overcome these difficulties. Intuitively, 3D palmprint recognition is a good solution.

In this paper, we extract local curvature features of the 3D palmprint for recognition. After the 3D depth information of the palm is obtained, the region of interest (ROI) of the 3D palmprint image is extracted. Besides reducing the data size for processing, the ROI extraction process also serves to align the palmprints and normalize the area for feature extraction. The mean curvature (MC) and Gaussian curvature (GC) features of each cloud point in the ROI are then calculated. Finally, the MC and GC features are fused by Dempster-Shafer (D-S) fusion theory, and the input palm is recognized. We established a 3D palmprint database with 1000 range images from 100 people, and extensive experiments were conducted to evaluate the performance of the proposed method.

This paper is organized as follows. The acquisition of 3D palmprint data is described in Section 2. Section 3 describes the ROI determination, the extraction of the MC and GC features from the ROI, and the matching of these features; the calculation of the palm MC and GC features is described in detail. Section 4 presents the palmprint recognition method based on D-S fusion theory. Section 5 presents and analyzes the main experimental results. Section 6 gives the summary and conclusions.

2. 3D Palmprint Data Acquisition

In each acquisition session, the subject sits approximately 0.1 meters away from the sensor. The data were acquired with a CPOS MINI PLUSII range scanner. One 1280 × 1024 3D scan and one 1280 × 1024 color image were obtained in a period of several seconds. Figure 1 shows the 3D palmprint data acquisition device developed by the Information and Communication Technology Research (ICTR) Institute, University of Wollongong.

As shown in Figure 2, a fringe pattern generated by the grating crosses the surface of the object at point $A$ and, extended, reaches the reference plane at point $B$; the projected fringe is then reflected by the object surface to the CCD camera along a line that crosses the reference plane at point $C$.

The height of the reference plane is defined as 0. The relative height $h(x,y)$ of a point at spatial position $(x,y)$ relative to the reference plane is computed by the following [14]:
$$h(x,y)=\frac{\overline{BC}\,\tan\theta_{p}\tan\theta_{c}}{\tan\theta_{p}+\tan\theta_{c}},\qquad \overline{BC}=\frac{\lambda}{2\pi}\,\Delta\phi_{BC},\tag{1}$$
where $\lambda$ is the wavelength of the projected light on the reference plane, $\theta_{p}$ is the projecting angle, $\theta_{c}$ is the angle between the reference plane and the line that passes through the current point and the CCD center, and $\Delta\phi_{BC}$ is the phase difference between points $B$ and $C$. Because the phase of point $A$ on the 3D object is equal to the phase of point $B$ on the reference plane, $\Delta\phi_{BC}$ can be calculated by the following:
$$\Delta\phi_{BC}=\phi_{C}-\phi_{B}=\phi_{C}-\phi_{A}.\tag{2}$$
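To make (1) concrete, here is a minimal Python sketch that converts a per-pixel phase-difference map into relative heights; the function and parameter names are illustrative, and the geometry constants are placeholders rather than calibration values from our device:

```python
import numpy as np

def height_from_phase(delta_phi, lam, theta_p, theta_c):
    """Relative height h(x, y) from the phase-difference map, following (1).

    delta_phi: phase difference between points B and C at each pixel (radians).
    lam:       fringe wavelength on the reference plane.
    theta_p:   projecting angle (radians).
    theta_c:   angle between the reference plane and the pixel-to-CCD-center line (radians).
    """
    bc = lam * delta_phi / (2.0 * np.pi)  # distance BC on the reference plane
    return bc * np.tan(theta_p) * np.tan(theta_c) / (np.tan(theta_p) + np.tan(theta_c))
```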

By using (1) and the phase shifting and unwrapping technique [15], we can retrieve the depth information of the object surface by projecting a series of phase stripes onto it (15 stripes are used). Six sample patterns of the stripes on the palm are illustrated in Figure 3.
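As a hedged illustration of the phase-shifting step, the sketch below recovers the wrapped phase from N equally shifted fringe images with the standard N-step formula and then applies numpy's simple 1D unwrapping; the robust 2D unwrapping used in [15] is more involved and is not reproduced here:

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: the k-th image is shifted by 2*pi*k/N.

    images: array of shape (N, H, W). Returns the wrapped phase in (-pi, pi].
    """
    imgs = np.asarray(images, dtype=np.float64)
    n = imgs.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    num = np.sum(imgs * np.sin(2.0 * np.pi * k / n), axis=0)
    den = np.sum(imgs * np.cos(2.0 * np.pi * k / n), axis=0)
    return np.arctan2(-num, den)

def unwrap_phase(phi):
    """Naive 2D unwrapping: unwrap along columns, then along rows."""
    return np.unwrap(np.unwrap(phi, axis=0), axis=1)
```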

With this processing, the relative height of each point can be calculated, and the range data of the palm surface can then be obtained. In the developed system, the size of the 3D image is 1280 × 1024; that is, in total 1,310,720 cloud points represent the 3D palmprint information. Figure 4 shows an example 3D palmprint image captured by the system.

3. Feature Extraction

3.1. ROI Extraction

In Figure 4, many cloud points, such as those in the boundary area and on the fingers, cannot be used for feature extraction and recognition; most of the useful and stable features are located in the central area of the palm. In addition, each time the user places his/her hand on the system, there will be some relative displacement of the palm position, even if we impose some constraints on hand placement. Therefore, before feature extraction, it is necessary to perform preprocessing to align the palmprint and extract its central area. In this paper, we use the algorithm in [16] to extract the 3D ROI. Figure 5 shows the ROI of a 3D palmprint.

Through the ROI extraction procedure, the 3D palmprints are aligned so that the small translations and rotations introduced in the data acquisition process are corrected. In addition, the amount of data used in the subsequent feature extraction and matching is significantly reduced.
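The ROI algorithm of [16] aligns the palm before cropping; as a rough placeholder only (it reproduces the data reduction but not the alignment), one can crop a fixed central window from the range image:

```python
import numpy as np

def central_roi(range_img, size=256):
    """Crop a size-by-size window around the image center (placeholder for [16])."""
    h, w = range_img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return range_img[top:top + size, left:left + size]
```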

3.2. MC and GC Feature Extraction

It has been shown that the Gaussian and mean curvatures combine the first and second fundamental forms of a surface in two different ways to obtain scalar surface features that are invariant to rotations, translations, and changes in parameterization [17]. Considering the rotation, translation, and even some deformation of the 3D palmprint cloud points, we use the mean and Gaussian curvatures to describe the surface of the 3D palmprint.

Mean ($H$) and Gaussian ($K$) curvature images are computed [18] using the partial derivative estimate images:
$$H=\frac{\left(1+h_{y}^{2}\right)h_{xx}-2h_{x}h_{y}h_{xy}+\left(1+h_{x}^{2}\right)h_{yy}}{2\left(1+h_{x}^{2}+h_{y}^{2}\right)^{3/2}},\tag{3}$$
$$K=\frac{h_{xx}h_{yy}-h_{xy}^{2}}{\left(1+h_{x}^{2}+h_{y}^{2}\right)^{2}},\tag{4}$$
where
$$h_{x}=D_{x}\circledast h,\qquad h_{y}=D_{y}\circledast h,\tag{5}$$
where
$$h_{xx}=D_{xx}\circledast h,\qquad h_{yy}=D_{yy}\circledast h,\tag{6}$$
where
$$h_{xy}=D_{xy}\circledast h,\tag{7}$$
in which $h(x,y)$ is the range image, $\circledast$ denotes convolution, and $D_{x}$, $D_{y}$, $D_{xx}$, $D_{yy}$, and $D_{xy}$ are the derivative estimation window operators of [18].
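The following numpy sketch implements (3)-(7), substituting simple finite differences (np.gradient) for the least-squares derivative windows of [18]:

```python
import numpy as np

def curvature_images(h):
    """Mean (H) and Gaussian (K) curvature images of a range image h(x, y)."""
    hx, hy = np.gradient(h)        # first-order partials, cf. (5)
    hxx, hxy = np.gradient(hx)     # second-order partials, cf. (6)-(7)
    _, hyy = np.gradient(hy)
    g = 1.0 + hx**2 + hy**2
    H = ((1 + hy**2) * hxx - 2 * hx * hy * hxy + (1 + hx**2) * hyy) / (2.0 * g**1.5)
    K = (hxx * hyy - hxy**2) / g**2
    return H, K
```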

For more efficient computation, we first normalize the Gaussian or mean curvature into $[0,1]$ by the following:
$$\bar{C}(i,j)=\begin{cases}0, & C(i,j)<-4\mu,\\[0.5ex] 0.5+\dfrac{C(i,j)}{8\mu}, & -4\mu\le C(i,j)\le 4\mu,\\[0.5ex] 1, & C(i,j)>4\mu,\end{cases}\tag{8}$$
where $\mu$ is the mean of the absolute value of the curvature. With (8), most of the curvature values will be normalized into the interval $[0,1]$.
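A direct implementation of (8) as reconstructed above, with np.clip realizing the two saturation cases:

```python
import numpy as np

def normalize_curvature(c):
    """Normalize a curvature image into [0, 1] as in (8)."""
    mu = np.mean(np.abs(c))  # mean of the absolute curvature values
    return np.clip(0.5 + c / (8.0 * mu), 0.0, 1.0)
```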

3.3. MC and GC Feature Matching

We use the method in [11] to calculate the matching scores of the MC and GC features. Let $D$ denote the binarized MC/GC image in the database and let $T$ denote the input MC/GC binary image. Suppose the image size is $M\times N$. The matching score between $D$ and $T$ is defined as
$$R(D,T)=\frac{2\sum_{i=1}^{M}\sum_{j=1}^{N}\left(D(i,j)\wedge T(i,j)\right)}{\sum_{i=1}^{M}\sum_{j=1}^{N}D(i,j)+\sum_{i=1}^{M}\sum_{j=1}^{N}T(i,j)},\tag{9}$$
where $\wedge$ denotes the logical AND operation.
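Reading (9) as the overlap score above, the matching of two binary feature images takes a few lines; identical images score 1 and disjoint images score 0:

```python
import numpy as np

def matching_score(d, t):
    """Matching score R(D, T) between binary MC/GC images, as in (9)."""
    d, t = d.astype(bool), t.astype(bool)
    return 2.0 * np.count_nonzero(d & t) / (np.count_nonzero(d) + np.count_nonzero(t))
```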

4. Palmprint Recognition Method

4.1. Concept of D-S Evidence Theory

Suppose that $\Theta=\{\theta_{1},\theta_{2},\ldots,\theta_{n}\}$ is the set of independent propositions (the frame of discernment), and $2^{\Theta}$ represents the set of all subsets of $\Theta$. In this paper, the MC and GC matching results are the evidence. The belief degrees of all the propositions are assigned through the basic probability function $m:2^{\Theta}\rightarrow[0,1]$, which is defined as follows.

The function $m$ must satisfy the following conditions:
(1) the basic probability of the impossible event is 0: $m(\varnothing)=0$;
(2) $\sum_{A\subseteq\Theta}m(A)=1$; that is, the sum of the basic probabilities of all elements of $2^{\Theta}$ is 1.
Suppose that $n$ is the number of palmprint types, $N$ is the number of recognition methods, $\omega_{i}$ is the weighting coefficient of recognition method $i$, and $C_{i}(j)$ is the correlation coefficient between recognition method $i$ and palmprint type $j$. Considering the effect of the number of palmprints and the environment on recognition, the basic probability function can be determined by the following [19]:
$$m_{i}(\theta_{j})=\omega_{i}\,\frac{C_{i}(j)}{\sum_{k=1}^{n}C_{i}(k)},\qquad j=1,2,\ldots,n,\tag{10}$$
where
$$m_{i}(\Theta)=1-\omega_{i}\tag{11}$$
is the basic probability assigned to the uncertainty (the whole frame $\Theta$).
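Since (10)-(11) are reconstructed above, the sketch below should be read as one plausible reading of the assignment in [19]: each method distributes its weight over the classes in proportion to its correlation coefficients and leaves the remainder on $\Theta$.

```python
import numpy as np

def basic_probability(corr, omega):
    """BPA of one recognition method over {theta_1..theta_n} plus Theta.

    corr:  length-n correlation coefficients C_i(j) (e.g., matching scores).
    omega: weighting coefficient of the method, 0 <= omega <= 1.
    Returns (m, m_theta): per-class masses, eq. (10), and m(Theta), eq. (11).
    """
    corr = np.asarray(corr, dtype=np.float64)
    m = omega * corr / corr.sum()
    return m, 1.0 - omega
```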

4.2. Dempster Fusion Theory

Suppose that $m_{1}$ and $m_{2}$ are two basic probability functions with focal elements $A_{1},A_{2},\ldots,A_{k}$ and $B_{1},B_{2},\ldots,B_{l}$, respectively. The combined basic probability is given by (13).

Suppose that
$$K=\sum_{A_{i}\cap B_{j}=\varnothing}m_{1}(A_{i})\,m_{2}(B_{j}).\tag{12}$$
Then,
$$m(C)=\begin{cases}\dfrac{\sum_{A_{i}\cap B_{j}=C}m_{1}(A_{i})\,m_{2}(B_{j})}{1-K}, & C\neq\varnothing,\\[1ex] 0, & C=\varnothing.\end{cases}\tag{13}$$
The parameter $K$ represents the conflict degree of the evidence.

In (13), if $K\neq 1$, a combined basic probability is determined. If $K=1$, then $m_{1}$ and $m_{2}$ are completely contradictory and cannot be combined. When the evidence is compatible, Dempster fusion concentrates the mass on the propositions supported by both sources, yielding a much stronger belief after fusion.
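For the BPA structure used here (mass on the $n$ singleton classes plus $\Theta$), Dempster's rule (12)-(13) reduces to a closed form; a minimal sketch:

```python
import numpy as np

def dempster_combine(m1, t1, m2, t2):
    """Combine two BPAs over {theta_1..theta_n, Theta} by (12)-(13).

    m1, m2: length-n class masses; t1, t2: uncertainty masses m(Theta).
    """
    m1, m2 = np.asarray(m1), np.asarray(m2)
    # Conflict K, eq. (12): products of masses on different singleton classes.
    K = m1.sum() * m2.sum() - np.dot(m1, m2)
    if np.isclose(K, 1.0):
        raise ValueError("Total conflict: the evidence cannot be combined.")
    m = (m1 * m2 + m1 * t2 + t1 * m2) / (1.0 - K)  # eq. (13) for each class
    t = t1 * t2 / (1.0 - K)                        # eq. (13) for Theta
    return m, t
```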

4.3. Palmprint Recognition Rules

The evidence is fused by Dempster fusion theory. How the final decision is made depends on the application. In this paper, the basic probability is adopted, and the decision mainly relies on the following rules (see the sketch after this paragraph):
(1) the target palmprint has the largest believability;
(2) the difference between the believability of the target palmprint and that of the other palmprints must be larger than a threshold;
(3) the uncertainty mass must be smaller than a threshold;
(4) the believability of the target palmprint must be larger than the uncertainty mass.
Suppose that
$$m(\theta_{1})=\max_{\theta_{j}\subseteq\Theta}m(\theta_{j}),\qquad m(\theta_{2})=\max_{\theta_{j}\subseteq\Theta,\ \theta_{j}\neq\theta_{1}}m(\theta_{j}).\tag{14}$$
If
$$m(\theta_{1})-m(\theta_{2})>\varepsilon_{1},\qquad m(\Theta)<\varepsilon_{2},\qquad m(\theta_{1})>m(\Theta),\tag{15}$$
then $\theta_{1}$ is the final recognition result. Here $\varepsilon_{1}$ and $\varepsilon_{2}$ are preset thresholds, and $m(\Theta)$ represents the uncertainty probability.
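The rules translate directly into code; the thresholds eps1 and eps2 below are illustrative defaults, not the values used in our experiments:

```python
import numpy as np

def decide(m, m_theta, eps1=0.1, eps2=0.1):
    """Apply the decision rules (14)-(15) to fused masses.

    Returns the index of the recognized palmprint, or None if undecided.
    """
    order = np.argsort(m)[::-1]
    best, second = order[0], order[1]
    if (m[best] - m[second] > eps1   # rule (2): clear margin over the runner-up
            and m_theta < eps2       # rule (3): small uncertainty
            and m[best] > m_theta):  # rule (4): belief exceeds uncertainty
        return int(best)             # rule (1): largest believability wins
    return None
```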

4.4. Procedure of Palmprint Recognition Method

The procedure of the proposed algorithm is as follows.

(1) The belief function of the mean curvature recognition and the belief function of the Gaussian curvature recognition are assigned according to (10), respectively.

(2) The fused belief function is determined by (13).

(3) Palmprint recognition is accomplished according to the recognition rules in Section 4.3.

In this paper, we adopt the mean curvature recognition (MCR) method and the Gaussian curvature recognition (GCR) method of [11]. Based on [11], the belief functions and recognition results carry some prior information from which the parameters $\omega_{i}$ and $C_{i}(j)$ can be determined. Specifically, $\omega_{i}$ and $C_{i}(j)$ can be calculated from the accuracy of the corresponding recognition method.

Figure 6 shows the flowchart of palmprint recognition based on data fusion.

5. Experimental Results and Analysis

To verify the performance of the proposed algorithm, extensive experiments were conducted on a database collected with the 3D palmprint imaging device. The database consists of 1000 samples from 100 volunteers, including 50 males and 50 females. The 3D palmprint samples were collected in two separate sessions, and in each session five samples were collected from each subject. The average time interval between the two sessions was two weeks. Each palm has 10 images, among which there are 2 normal images, 2 under-illumination images, 2 over-illumination images, 2 dirty images, and 2 sweat images. The original spatial resolution of the data is 1280 × 1024. After ROI extraction, the central part (256 × 256) is used for feature extraction and recognition. The depth (value) resolution of the data is 16 bits.

These 1000 images are divided into five groups for fivefold cross-validation; each group has 200 images. Four groups (containing 800 images in total) are used as training samples, and the remaining group (containing 200 images) is used as testing samples. The process is repeated five times.
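The split can be reproduced with a standard five-fold partition, for example with scikit-learn (sample indices 0-999 stand in for the images):

```python
import numpy as np
from sklearn.model_selection import KFold

samples = np.arange(1000)  # indices of the 1000 range images
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(samples)):
    # 800 training samples and 200 testing samples per fold
    print(f"fold {fold}: train={len(train_idx)}, test={len(test_idx)}")
```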

During palmprint recognition, the probabilities of the mean curvature recognition and Gaussian curvature recognition are represented by memberships, which are the coefficients of the belief functions corresponding to these two methods. Finally, the fused belief function is obtained by fusion theory, and the final recognition is accomplished according to the fusion rules.

Table 1 shows the basic belief functions of MCR and GCR, their fused belief function, and the final recognition results for four subjects based on MCR and GCR data fusion.

From Table 1, for the testing sample O1, the basic believability of MCR and GCR is 0.5769 and 0.6878, respectively; after fusion, the combined basic believability increases to 0.8239. Meanwhile, the basic probability of the uncertainty of MCR and GCR is 0.0014 and 0.0435, respectively; after fusion, the combined uncertainty decreases to 0.0003. Thus, the reliability of recognition is improved by fusion.

The method of [20–22] is a representative 3D palmprint recognition algorithm. The authors extract both line and orientation features (LOF) from the enhanced mean curvature image of the 3D palmprint data. Then, the line and orientation features are fused at either score level or feature level for the final 3D palmprint recognition. Their experimental results show that feature-level fusion achieves higher accuracy than score-level fusion.

To verify the effectiveness of the proposed algorithm, we compare it with the MCR, GCR, and LOF [20] methods on the five classes of images. In the LOF [20] experiments, we adopted feature-level fusion for 3D palmprint recognition without BLPF [20].

Table 2 shows the recognition results on the normal, over-illumination, under-illumination, dirty, and sweat images, respectively.

From the experimental results, we can see that the proposed algorithm achieves a much lower EER than MCR and GCR, with an average equal error rate of 0.404%. However, LOF [20] achieves a slightly lower EER than the proposed algorithm. This is mainly because the quality of the 3D palmprint data in our experiments is not as good as that of the data used in LOF [20]: considerable noise is introduced in the data acquisition process, and the depth accuracy needs further improvement.

Moreover, the D-S fusion of MCR and GCR is robust to illumination changes and palm condition changes, so the proposed 3D palmprint recognition method has great potential.

6. Conclusions

In this paper, a novel palmprint recognition algorithm was proposed that combines 3D palmprint features using D-S evidence theory. After the 3D palmprint image is captured, the ROI is extracted to roughly align the palm and remove the unnecessary cloud points.

Then, two types of distinctive features, the mean curvature feature and the Gaussian curvature feature, are extracted. A fast feature matching method and Dempster-Shafer (D-S) fusion strategies are used to classify the palmprints. A 3D palmprint database with 1000 samples from 100 individuals was established, on which extensive experiments were performed. The results show that the proposed 3D palmprint recognition method is more robust to illumination variations and palm condition changes, and that fusing the mean curvature and Gaussian curvature features achieves a much higher recognition rate. In the future, the imaging technique needs further improvement for better recognition performance.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by Natural Science Foundation of China (no. 61203302), National Undergraduate Innovational Experimentation Program (201410060044), and Youth Science Research Funds for Tianjin University of Technology (LGYM201019).