The Scientific World Journal
Volume 2014, Article ID 157173, 19 pages
http://dx.doi.org/10.1155/2014/157173
Research Article

Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

Ying Chen,1,2,3,4 Yuanning Liu,1,2 Xiaodong Zhu,1,2 Fei He,1,2 Hongye Wang,1,2 and Ning Deng1,2

1College of Computer Science and Technology, Jilin University, Changchun 130012, China
2Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
3College of Software, Nanchang Hangkong University, Nanchang 330063, China
4Internet of Things Technology Institute, Nanchang Hangkong University, Nanchang 330063, China

Received 19 August 2013; Accepted 10 November 2013; Published 10 February 2014

Academic Editors: O. Greevy and S.-S. Liaw

Copyright © 2014 Ying Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. First, we introduce the process of feature extraction and representation based on the scale invariant feature transformation (SIFT) in detail. Second, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Third, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to accelerate the search for the different subregions' weights, and the weighted matching scores of the subregions are then fused to generate the final decision. The experimental results on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1) demonstrate that our proposed methods outperform some existing methods in terms of correct recognition rate, equal error rate, and computational complexity.

1. Introduction

Nowadays, biometric recognition has become a common and reliable way to authenticate the identity of a living person based on physiological or behavioral characteristics. Iris recognition is one of the most stable and reliable technologies among biometrics; desirable properties such as uniqueness, stability, and noninvasiveness make iris recognition particularly suitable for highly reliable human identification.

Generally speaking, traditional feature extraction approaches and the corresponding iris recognition systems can be roughly divided into five major categories: phase-based approaches [1–3], zero-crossing approaches [4], texture-analysis-based approaches [5], intensity-variation-analysis-based approaches [6, 7], and other approaches [8–13]. Most of the above-mentioned works need to convert the ring-shaped (polar coordinate) iris area to Cartesian coordinates to overcome variations and then extract features from the normalized rectangular iris pattern. However, factors such as changes in eye gaze, nonuniform illumination, and variations in orientation or scale may produce iris images of differing quality. Most traditional feature extraction methods have two drawbacks when processing such images. First, the coordinate transform may lose features because ring-shaped regions have different lengths; Proença and Alexandre [14] pointed out that the polar transformation may lead to aliasing, studied the relationship between the size of the captured iris image and its recognition accuracy, and observed that recognition accuracy drops considerably as the iris area changes. Second, most conventional iris recognition methods are unable to achieve true rotation invariance. Rotation invariance is important for an iris recognition system since changes of head orientation and binocular vergence may cause eye rotation [15]. Scale invariant feature transformation (SIFT), first proposed by Lowe [16], can effectively overcome the above shortcomings to a certain degree. SIFT is capable of extracting and matching points that are stable and characteristic between two images, using both image intensity and gradient information to characterize the neighborhood of a given landmark.
The algorithm includes scale-space extreme detection, keypoint localization, orientation assignment, feature description, and feature matching [17]. The SIFT technique has already demonstrated its efficacy on clinical CT images [17–19], 3D images [20, 21], omnidirectional images [22, 23], and Brodatz texture images [24], and it has also been applied to biometric recognition systems based on face [25], palmprint [26], and iris images [12, 13, 15, 27, 28]. Soyel and Demirel [25] proposed a discriminative SIFT for facial expression recognition: they adopted SIFT keypoint descriptors to construct facial feature vectors, used the Kullback-Leibler divergence for initial classification, and employed a weighted-majority-voting classifier to generate the final decision. Mehrotra et al. [12] pointed out that the scale invariant technique is suitable for annular iris images because the iris size changes with expansion and contraction of the pupil. However, traditional SIFT also has shortcomings. Soyel and Demirel [25] noted that the major drawback of the standard SIFT technique is that it does not consider feature location, so two keypoints at minimum descriptor distance may not correspond to the same image part. Belcher and Du [13] proposed a region-based SIFT approach to iris recognition: they divided the annular iris area into left, right, and bottom subregions, obtained the best matching score for each of the three subregions, and finally averaged the three best matching scores into an overall matching score. However, simple averaging is unreasonable because the three subregions carry different features, and each should be assigned a weight according to its intrinsic information.

In this paper, we propose an efficient iris recognition system based on optimal subfeature selection strategies and a subregion fusion method. The recognition system is composed of two parts. The first part is discriminative subfeature selection based on a finite-delete-sorting multistage strategy, and the second is fusion of the subregions of the segmented annular iris area. The goal of discriminative subfeature selection is to discard redundant SIFT keypoint features; the feature selection strategies include selection based on a keypoint's orientation, selection based on a keypoint's neighborhood magnitude, and compounded feature selection. The purpose of weighted subregion feature fusion is to overcome the major drawback of the standard SIFT technique. First, we divide the segmented annular iris area into three equally sized partitions in a nonoverlapping way. Second, the weighted coefficients of the subregions are obtained via training with the particle swarm optimization (PSO) method. Finally, we adopt weighted subregion matching to reach the final decision.

The rest of the paper is organized as follows. Section 2 describes feature extraction and feature representation based on SIFT in detail. Section 3 introduces our three proposed discriminative subfeature selection strategies. Section 4 focuses on subregion partition, the corresponding weight assignment, and weighted subregion matching. Experimental results, comparisons with state-of-the-art methods, and discussion are presented in Section 5. Section 6 summarizes this study.

2. Feature Extraction and Representation

Before extracting iris features, the iris image needs to be preprocessed, and locating the iris area in an iris image is an important step. In the past several years, we have done related work on iris image preprocessing; here we adopt a coarse-to-fine segmentation method based on an adaptive level set, which can segment the iris area accurately and exclude eyelashes and eyelids. Meanwhile, the drawbacks of coordinate transformation were described in the introduction. Therefore, the following experiments directly consider the annular iris region without normalization. Examples of segmented iris images are shown in Figure 1.

Figure 1: Examples of segmented iris images. (a), (b), and (c) are original iris images from the CASIA-V3 Interval database (S1002L04.jpg), the CASIA-V3 Lamp database (S2005R02.jpg), and the MMU-V1 database (yannl5.bmp, left), respectively. (d), (e), and (f) are the segmented iris images corresponding to (a), (b), and (c).
2.1. Detection of Scale-Space Extreme

The first step is to construct a Gaussian scale space L(x, y, σ); the input image I(x, y) is successively smoothed with a Gaussian function via

\[
L(x, y, \sigma) = G(x, y, \sigma) \ast I(x, y), \qquad
G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}} \exp\!\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right),
\tag{1}
\]

where σ is the scale parameter and ∗ is the convolution operation in x and y.

Then the difference-of-Gaussian (DOG) images D(x, y, σ) can be computed from the difference of two nearby scales separated by a constant multiplicative factor k via

\[
D(x, y, \sigma) = \bigl(G(x, y, k\sigma) - G(x, y, \sigma)\bigr) \ast I(x, y)
               = L(x, y, k\sigma) - L(x, y, \sigma).
\tag{2}
\]

Each collection of DOG images and Gaussian-smoothed images of the same size is called an octave, and each octave of scale space is divided into an integer number s of intervals, so k = 2^{1/s}. It is necessary to produce s + 3 images for each octave so that the final extreme detection covers a complete octave. In this paper, we set the scale number to 6 and the octave number to 5. Figure 2 shows the Gaussian-smoothed iris images and the corresponding DOG images as the octave, scale, and σ change.
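The pyramid construction above can be sketched as follows. This is a minimal illustration assuming NumPy and SciPy (not the authors' implementation), following the standard SIFT convention in which an octave of s intervals produces s + 3 Gaussian images and k = 2^{1/s}; function and parameter names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_dog_pyramid(image, n_octaves=5, n_scales=6, sigma0=1.6):
    """Build Gaussian and difference-of-Gaussian (DOG) pyramids.

    Each octave holds n_scales Gaussian images whose scales differ by the
    constant factor k = 2**(1/(n_scales - 3)); differencing adjacent scales
    yields n_scales - 1 DOG images per octave.
    """
    k = 2.0 ** (1.0 / (n_scales - 3))
    gaussians, dogs = [], []
    base = image.astype(np.float64)
    for _ in range(n_octaves):
        octave = [gaussian_filter(base, sigma0 * k ** i) for i in range(n_scales)]
        gaussians.append(octave)
        dogs.append([octave[i + 1] - octave[i] for i in range(n_scales - 1)])
        base = octave[-3][::2, ::2]  # downsample the 2x-scale image for the next octave
    return gaussians, dogs
```

With the paper's settings (scale number 6, octave number 5), each octave holds 6 Gaussian images and 5 DOG images.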

Figure 2: Detection of scale-space extrema. (a) Gaussian-smoothed annular iris images for different octaves, scales, and σ; (b) corresponding DOG images.

In order to detect the local minima and maxima of the DOG images, each sample point is compared to its eight neighbors in the current image and its nine neighbors in each of the scales above and below; only sample points that are larger or smaller than all of these neighbors are selected [29]. These minima and maxima points are called keypoints.
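The 26-neighbor comparison can be sketched directly; a minimal NumPy check (names are ours, not the authors'):

```python
import numpy as np

def is_scale_space_extremum(dog_prev, dog_cur, dog_next, r, c):
    """Return True if dog_cur[r, c] is strictly larger (or strictly smaller)
    than all 26 neighbours: 8 in its own DOG image and 9 in each of the
    adjacent-scale DOG images."""
    patch = np.stack([dog_prev[r-1:r+2, c-1:c+2],
                      dog_cur [r-1:r+2, c-1:c+2],
                      dog_next[r-1:r+2, c-1:c+2]])
    center = dog_cur[r, c]
    others = np.delete(patch.ravel(), 13)  # flat index 13 is the centre pixel
    return bool(np.all(center > others) or np.all(center < others))
```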

2.2. Keypoints Localization

Once the candidate keypoints have been detected, the next step is to perform a detailed fit to the nearby data for location, scale, and ratio of principal curvatures [29]. Any points that have low contrast (and are therefore sensitive to noise) or are poorly localized along an edge should be rejected. In 2001, Lowe [30] adopted a 3D quadratic function to fit the local sample points and determine the interpolated location of the maximum. A threshold on minimum contrast and a threshold on the ratio of principal curvatures are applied; the former excludes low-contrast points and the latter removes edge points. Therefore, SIFT provides a set of distinctive points that are invariant to scale, rotation, and translation, as well as robust to illumination changes and limited changes of viewpoint [31]. Figure 3 shows the stages of keypoint selection on an annular iris image using SIFT.

Figure 3: The stages of keypoint selection on an annular iris image using SIFT. (a) The 267 keypoints at all detected maxima and minima of the DOG function; (b) the final 216 keypoints remaining after applying a minimum-contrast threshold (0.01) and discarding points whose ratio of principal curvatures exceeds 5.
2.3. Orientation Assignment

After determining the keypoints based on SIFT, a main orientation is assigned to each keypoint based on local image gradients. For each image sample L(x, y), the gradient magnitude m(x, y) and orientation θ(x, y) are computed as (3) and (4), respectively:

\[
m(x, y) = \sqrt{\bigl(L(x+1, y) - L(x-1, y)\bigr)^{2} + \bigl(L(x, y+1) - L(x, y-1)\bigr)^{2}},
\tag{3}
\]

\[
\theta(x, y) = \tan^{-1}\!\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}.
\tag{4}
\]

An orientation histogram is formed from the gradient orientations around each keypoint within a certain region. The orientation histogram has 36 bins covering the 360-degree range; each sample is weighted by its gradient magnitude and by a Gaussian-weighted circular window with σ equal to 1.5 times the scale of the keypoint before being added to the orientation histogram.

The highest peak of the orientation histogram, together with any other peaks with amplitudes within 80% of the highest peak, is used to create keypoints with the computed orientations. The direction and scale of each orientation are indicated in white, as shown in Figure 4.
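The 36-bin histogram and the 80% peak rule can be sketched as follows. This is a simplified illustration (names are ours); the Gaussian spatial weighting with σ = 1.5 × keypoint scale is assumed to be folded into the magnitudes passed in by the caller, and true SIFT additionally interpolates peak positions.

```python
import numpy as np

def dominant_orientations(magnitudes, orientations_deg, peak_ratio=0.8):
    """Accumulate a 36-bin orientation histogram (10 degrees per bin),
    weighting each sample by its gradient magnitude, and return the bin
    centres of the highest peak and of any peak within peak_ratio of it."""
    hist = np.zeros(36)
    bins = (np.asarray(orientations_deg, dtype=float) // 10).astype(int) % 36
    np.add.at(hist, bins, magnitudes)        # magnitude-weighted accumulation
    threshold = peak_ratio * hist.max()
    return [b * 10 + 5 for b in range(36) if hist[b] >= threshold]
```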

Figure 4: (a) The scale and direction of each orientation are indicated by white arrows; (b) 3D magnitude representation of the keypoints detected in the annular iris area.
2.4. Keypoint Descriptor Representation

In this step, a distinctive descriptor is computed for the local image region. Keypoints are transformed into representations called keypoint descriptors, containing the values of all orientation histogram entries [25]. A keypoint descriptor is characterized by the gradient magnitudes and orientations in a region around the keypoint location. Figure 5 shows the process of keypoint descriptor formation.

Figure 5: The process of computing a keypoint descriptor. The gradient magnitude and orientation at each sample point in a region around the keypoint location are computed first, as shown in (a). These samples are then accumulated into orientation histograms summarizing the contents of subregions, with the length of each arrow corresponding to the sum of the gradient magnitudes near that direction within the region, as shown in (b). (a) Image gradients; (b) keypoint descriptor.

Lowe [29] pointed out that a 4 × 4 array of histograms with 8 orientation bins for each keypoint achieved the best results; hence, there is a 4 × 4 × 8 = 128-element feature vector for each keypoint. In our work, we also adopt 128-element feature vectors for each keypoint.

3. Discriminative Feature Selections

In general, an iris recognition system should select compact and effective features based on the distinct characteristics of the representation data. From the previous work and discussions, the SIFT features may contain redundancy. Therefore, this paper adopts feature selection techniques to find a suitable feature subset, keeping only the features with more discriminative information and discarding the least useful ones. We select discriminative features based on three strategies:

(1) sort the orientation probability distribution function (OPDF), built from the keypoints' main orientations, in descending order and delete the keypoints that have small orientation probability; this operation reduces the number of keypoints;
(2) sort the magnitude probability distribution function (MPDF), built from the keypoints' neighborhood magnitudes, in ascending order and delete the feature elements that have larger magnitude probability; the purpose of this operation is to reduce the dimension of the feature elements;
(3) reduce both the number of keypoints and the dimension of the feature elements by combining the above two methods.

The ultimate purpose of discriminative feature selection is to realize two goals:

(1) minimization of the number of features;
(2) maximization of the correct recognition rate and minimization of the equal error rate.

3.1. Discriminative Feature Selection Based on Orientation

The OPDF of the detected keypoints is shown in Figure 6; the main orientations of the 216 keypoints follow a nonuniform distribution. Here, we equally divide the 360° orientation range into 20 intervals in an anticlockwise way, so each interval spans 18°; for convenience, these intervals are numbered from 1 to 20 (labeled in red). From Figure 6, it can be seen that the most populated interval contains 19 keypoints, whereas the least populated contains only 2.

Figure 6: OPDF of keypoints’ primary orientation.

We deploy a sorting procedure on the OPDF according to the keypoint counts of the intervals. The OPDF is denoted by the vector P = (p_1, p_2, ..., p_20), with the p_i summing to 1. The elements of P are sorted in descending order, and the sorted vector is denoted by P', so p'_1 ≥ p'_2 ≥ ... ≥ p'_20; keeping only the first t elements of P' retains the orientation intervals carrying the largest probability mass. Since the image matching method focuses on keypoints with dominant orientations, this study finitely deletes the keypoints corresponding to the last several elements of P', so that only keypoints in the higher-probability intervals are used as the discriminative subfeature. In order to achieve an optimal deletion scheme, a finite-delete-sorting (FDS) method is used to select the optimal features. For each feature subset, the evaluation of discriminative features requires training the corresponding support vector machine (SVM) and computing its accuracy. The performance of the SVM classifier is estimated using a validation iris image database and is used to guide the FDS, as shown in Figure 7.
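The OPDF construction and tail deletion can be sketched as follows; a minimal illustration under our own naming (the SVM-guided choice of how many intervals to keep is left to the caller):

```python
import numpy as np

def opdf_keypoint_selection(orientations_deg, n_intervals=20, n_keep=17):
    """Bin keypoint main orientations into n_intervals equal intervals,
    form the OPDF, and keep only the keypoints whose interval ranks among
    the n_keep most populated (finite-delete-sorting: drop the tail)."""
    oris = np.asarray(orientations_deg, dtype=float)
    bins = (oris // (360.0 / n_intervals)).astype(int) % n_intervals
    counts = np.bincount(bins, minlength=n_intervals)
    opdf = counts / counts.sum()                      # probabilities sum to 1
    kept_intervals = set(np.argsort(opdf)[::-1][:n_keep])
    keep_mask = np.array([b in kept_intervals for b in bins])
    return keep_mask, opdf
```

In the FDS loop, `n_keep` would be varied and each resulting subset scored by the SVM described below.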

Figure 7: Feature selection process.

SVM is a well-accepted approach for pattern classification due to its attractive properties and promising performance [32]; for more details, one can refer to [32], which provides a complete description of SVM theory. In order to make the linear learning machine work well in nonlinear cases, the original input space is mapped into a higher-dimensional feature space using a kernel function. In this study, the radial basis function (RBF) is used as the kernel function.

Here, it should be pointed out that the OPDF cannot be used with a standard SVM directly; instead, the corresponding PDF of the feature descriptors (FDPDF) is used as the identification feature for the SVM. There are two reasons for this. First, the dimension of the OPDF vector of each image class is too small (fewer than 20 dimensions) to necessarily achieve satisfactory SVM classification accuracy. Second, since the subsequent matching method is also based on keypoint descriptors, the FDPDF is a natural choice of classification feature. The FDPDF data are normalized by scaling them into the interval [0, 1], and class labels are assigned to the FDPDF data. In order to gain an unbiased estimate of the generalization accuracy, k-fold cross-validation (k is set at 10 in this study) is used to evaluate the classification accuracy. In the following experiments, the two important parameters (C and γ) of the SVM are also tuned for the three experimental iris image databases.
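The tuning of C and γ with 10-fold cross-validation can be sketched with scikit-learn; a minimal illustration in which the synthetic data stand in for the normalized FDPDF vectors and the grid values are ours, not the paper's:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the normalized FDPDF vectors and their class labels.
X, y = make_classification(n_samples=200, n_features=16, n_classes=2,
                           random_state=0)

# Tune C and gamma of the RBF-kernel SVM with 10-fold cross-validation.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [2 ** p for p in range(-1, 10, 2)],
                     "gamma": [2 ** p for p in range(-7, 1, 2)]},
                    cv=10)
grid.fit(X, y)
```

An outer cross-validation loop, as the paper describes later, would wrap this search to estimate generalization performance without bias.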

A ranking procedure over the FDPDF subsets is deployed according to the classification accuracy rate (CAR) scored by the SVM. Instead of using all keypoints' features, only the most discriminating FDPDF subset, the one achieving the highest CAR, is used as the optimal subfeature. For example, when selecting discriminative features for Figure 6(a), suppose the optimal subfeature deletes the least populated orientation intervals; the corresponding keypoints are removed, and the number of keypoints decreases from 216 to 209.

3.2. Discriminative Feature Selection Based on Magnitude

In the last subsection, the process of discriminative keypoint selection was introduced. In this section, the process of selecting feature descriptors based on magnitude is described in detail. We adopt two ways to describe keypoint descriptors. The first is the FDPDF, which is generated according to the values of the feature elements. The second is the neighborhood element probability distribution function (NEPDF), which is generated from the 16 neighborhoods of a keypoint. Figure 8 shows the detected keypoints and the corresponding FDPDF and NEPDF.

Figure 8: PDFs of the keypoints' feature descriptors. (a) The detected keypoints; (b) FDPDF; (c) NEPDF.

Comparing Figures 8(b) and 8(c), the FDPDF appears mixed and disorderly, whereas the NEPDF is much clearer. Meanwhile, as discussed previously, a detected keypoint's feature descriptor is generated from the accumulated magnitudes of 8 orientation bins in each of its 16 subregions; therefore, we focus on selecting the discriminative subfeature based on the NEPDF.

The specific processes of the FDPDF and NEPDF calculation are as follows. Assume that an image has K keypoints after SIFT feature extraction, each keypoint with a 128-dimensional feature vector; then a matrix A with K rows and 128 columns can be formed for the image. Let a_{ij} denote the element in the i-th row and j-th column of A. A vector V_i, called the i-th keypoint, can be obtained from the i-th row of A as

\[
V_i = (a_{i1}, a_{i2}, \dots, a_{i,128}).
\tag{5}
\]

Further, assume that the magnitude sum of the j-th neighborhood subregion is denoted by s_j; then s_j can be computed via

\[
s_j = \sum_{i=1}^{K} \sum_{l=8(j-1)+1}^{8j} a_{il}, \qquad j = 1, 2, \dots, 16.
\tag{6}
\]

Therefore, the NEPDF can be calculated through

\[
\mathrm{NEPDF}(j) = \frac{s_j}{\sum_{j=1}^{16} s_j}.
\tag{7}
\]

Further, the FDPDF can be computed via

\[
\mathrm{FDPDF}(l) = \frac{\sum_{i=1}^{K} a_{il}}{\sum_{i=1}^{K} \sum_{l=1}^{128} a_{il}}, \qquad l = 1, 2, \dots, 128.
\tag{8}
\]

The NEPDF is sorted in ascending order, and the FDS strategy is then used to delete some keypoint descriptor elements. The NEPDF is denoted by the vector Q = (q_1, q_2, ..., q_16), with the q_j summing to 1. The elements of Q are sorted in ascending order, and the sorted vector is denoted by Q', so q'_1 ≤ q'_2 ≤ ... ≤ q'_16; keeping only the first t elements of Q' retains the neighborhoods with the smallest accumulated magnitudes. Because standard SIFT adopts the Euclidean distance to match image pairs, this study finitely deletes the last several elements of Q' in order to lessen the matching distance. Here, the SVM is again adopted to generate the matching accuracy used to evaluate the discriminative effectiveness of each feature subset.
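The NEPDF computation of (6)-(7) and the element deletion can be sketched as follows; a minimal illustration with our own naming, where the SVM-guided choice of how many neighborhoods to keep is again left to the caller:

```python
import numpy as np

def nepdf_element_selection(features, n_keep=13):
    """features: (K, 128) SIFT descriptor matrix, viewed as 16 neighbourhood
    blocks of 8 orientation bins each. Compute the NEPDF (share of total
    magnitude per neighbourhood), sort it in ascending order, keep the
    n_keep lowest-magnitude neighbourhoods, and return the reduced matrix."""
    blocks = features.reshape(features.shape[0], 16, 8)
    s = blocks.sum(axis=(0, 2))                 # s_j: total magnitude per neighbourhood
    nepdf = s / s.sum()
    kept = np.sort(np.argsort(nepdf)[:n_keep])  # ascending NEPDF, keep first n_keep
    reduced = blocks[:, kept, :].reshape(features.shape[0], n_keep * 8)
    return reduced, nepdf
```

With `n_keep=13`, each descriptor shrinks from 128 to 13 × 8 = 104 elements, matching the example in the text.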

For example, when we select discriminative features for Figure 8(a) based on magnitude, as shown in Figure 8(c), the sorted vector is [10, 6, 7, 11, 5, 14, 2, 9, 3, 15, 12, 8, 1, 13, 16, 4]. Assume that the optimal subfeature keeps the first 13 elements of this vector; this means that the feature elements of neighborhoods 4, 16, and 13 are deleted. Figure 9 shows the resulting change of the SIFT feature: the dimension of every keypoint's feature vector is reduced from 128 to 104.

Figure 9: Illustration of feature elements selection based on magnitude. (a) Original feature matrix. (b) Discriminative feature matrix after feature selection based on neighborhood magnitude.
3.3. Compounded Feature Selection Based on Orientation and Magnitude

Having described discriminative feature selection based on orientation and on magnitude, we now introduce the compounded feature selection that combines the two. We name the function of compounded feature selection CFS; the detailed procedure of CFS is given in Algorithm 1.

Algorithm 1: Function CFS.

After obtaining the optimal subfeature via CFS, the number of detected keypoints is reduced and each feature vector has a lower dimension. Iris recognition based on this optimal subfeature achieves better performance.

4. Iris Image Partition and Subpattern Feature Contributions Analysis

4.1. Iris Image Partition

In 2009, Belcher and Du [13] assumed that the relative position of features does not change despite scale, rotation, and dilation variance in iris images: features close to the pupil will remain close to the pupil, and features on the right side of the iris will never move to the left side. Three years later, in 2012, Soyel and Demirel [25] proposed a grid-based approach to overcome the major drawback of standard SIFT. They further concluded that the grid-based approach has three advantages. The first is that local matching within a grid constrains the SIFT features to match features from nearby areas only. The second is an increase in matching speed, since the number of candidate features decreases. The major advantage is that the grid-based method allows weighting regions, ensuring that regions of the image carrying more information are associated with higher weight values and are thus considered more significant.

In subpattern-based iris recognition methods, an iris image can be partitioned into a set of equally or unequally sized subimages depending on the user's option. However, how to choose an appropriate subimage size that gives optimal performance is still an open problem [33], and this study does not attempt to deal with that issue. Without loss of generality, we divide the segmented annular iris area into equally sized partitions in a nonoverlapping way: three major subregions, denoted as the upper, middle, and bottom subregions. The partition result is shown in Figure 10.
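A simple way to realize this partition is to assign each keypoint of the segmented annular region to one of three equal horizontal bands by its y coordinate; this is our own sketch of the idea, not the paper's code:

```python
def partition_keypoints(keypoints_xy, y_min, y_max):
    """Split the keypoints of a segmented annular iris region into three
    equally sized, nonoverlapping horizontal bands (upper, middle, bottom)
    according to their y coordinate within [y_min, y_max]."""
    names = ["upper", "middle", "bottom"]
    bands = {n: [] for n in names}
    for x, y in keypoints_xy:
        # Map y to band 0, 1, or 2; clamp so y == y_max falls in the last band.
        idx = min(int((y - y_min) / (y_max - y_min) * 3), 2)
        bands[names[idx]].append((x, y))
    return bands
```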

Figure 10: The partition process of segmented iris region. (a) Original segmented iris region. (b) Upper, middle, and bottom subregions.
4.2. Weights of Subpattern Calculations with PSO

Although papers [13, 25] partition the experimental images, they do not explain the weight assignment process in detail. Moreover, existing studies have demonstrated that different segmented iris areas carry nonuniform feature information. Hollingsworth et al. [34] pointed out that not all the bits in an iris are equally useful. Ma et al. [6] showed that the regions closer to the pupil provide the most useful texture information for recognition. Tsai et al. [35] pointed out that the region closer to the pupil usually contains more high-frequency components, the middle region consists of fewer and bigger irregular blocks, and the region closer to the limbus is usually covered by the eyelid and sparse patterns. In our previous work, we proved, based on local quality evaluation, that different tracks of iris images carry different feature information. From the above analysis, we can therefore conclude that giving the upper, middle, and bottom regions the same weight is unreasonable. In order to assign reasonable weights to the different subregions, this paper adopts a training scheme to obtain the weighted coefficients of the corresponding subregions. Meanwhile, particle swarm optimization (PSO) is adopted to accelerate the training process. PSO, first developed by Kennedy and Eberhart [36], is inspired by the social behavior of organisms such as bird flocking and fish schooling; it explores the search space with a population of particles, and more details can be found in [36].

Assume that w_u, w_m, and w_b denote the corresponding weights of the upper, middle, and bottom subregions, respectively, with values scaled in the range [0, 1]. Hence, the primary purpose of PSO is to determine the parameters (w_u, w_m, and w_b). To evaluate the improvement in performance achieved by the information fusion, the correct recognition rate (CRR) is adopted; the CRR is the ratio of the number of samples correctly classified to the total number of test samples. Moreover, two stopping criteria are used to decide whether the training algorithm has met its end conditions: reaching the maximum number of iterations, or repeatedly reaching Max(CRR). In the process of PSO iterative optimization, the termination condition is the largest CRR within a certain number of iterations. When

\[
\bigl| \mathrm{CRR}^{(t)} - \mathrm{CRR}^{(t-1)} \bigr| < \varepsilon,
\tag{9}
\]

where CRR^{(t)} denotes the best CRR at iteration t and ε is an extremely small value, the training algorithm is considered to have met the termination condition. Figure 11 shows a block diagram for the process of weight assignment with PSO.
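The weight search can be sketched with a standard global-best PSO; a minimal illustration under our own naming and with illustrative inertia/acceleration constants (0.7, 1.5), where `crr_fn` stands for a routine that evaluates the CRR of a candidate weight triple on the training set:

```python
import numpy as np

def pso_weights(crr_fn, n_particles=20, n_iter=50, eps=1e-6, seed=0):
    """Search for subregion weights (w_u, w_m, w_b) in [0, 1] that maximize
    crr_fn(weights); stop early when the best CRR improves by less than eps."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_particles, 3))          # particle positions = weight triples
    vel = np.zeros((n_particles, 3))
    pbest, pbest_val = pos.copy(), np.array([crr_fn(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    gbest_val = pbest_val.max()
    for _ in range(n_iter):
        prev = gbest_val
        r1, r2 = rng.random((2, n_particles, 3))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([crr_fn(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.max() > gbest_val:
            gbest_val = pbest_val.max()
            gbest = pbest[pbest_val.argmax()].copy()
        if abs(gbest_val - prev) < eps:         # termination condition, cf. (9)
            break
    return gbest / gbest.sum(), gbest_val       # normalize so the weights sum to 1
```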

Figure 11: Block diagram for the process of weight assignment with PSO.

After obtaining the optimal weights for the three subregions, without loss of generality, the three weights are normalized so that they sum to 1.

4.3. Weighted Subregion Matching

Matching between two images is performed by comparing each keypoint based on its associated descriptor [28]. Generally, there are four steps in the matching process.

Step 1. Consider a keypoint in the first image and find its closest and second-closest keypoints in the second image.

Step 2. Calculate the city distances from the keypoint to each of these two candidates. Here, we should point out that standard SIFT adopts the Euclidean distance to compute the distance between keypoint pairs; this study instead substitutes the city distance (CD) for the Euclidean distance to reduce the computational cost and speed up the matching process. Assuming two n-dimensional vectors X = (x_1, ..., x_n) and Y = (y_1, ..., y_n), the CD of X and Y can be calculated via

\[
\mathrm{CD}(X, Y) = \sum_{i=1}^{n} |x_i - y_i|.
\]

Step 3. Decide whether the points match. If the ratio of the closest distance to the second-closest distance is smaller than a predefined threshold, the two points are considered a matching point pair. In this paper, a threshold of 0.85 is chosen for this ratio.

Step 4. Decide the matching score between the two images based on the number of matched points.
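Steps 1-3 can be sketched as follows; a minimal NumPy illustration with our own naming, using the city-block (L1) distance and the 0.85 ratio test described above:

```python
import numpy as np

def match_keypoints(desc_a, desc_b, ratio=0.85):
    """Match descriptors of image A against image B using the city-block
    (L1) distance and the ratio test: a keypoint matches its nearest
    neighbour only when d1 / d2 < ratio for the two closest candidates."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.abs(desc_b - d).sum(axis=1)      # city-block distances
        j1, j2 = np.argsort(dists)[:2]              # closest, second-closest
        if dists[j1] / max(dists[j2], 1e-12) < ratio:
            matches.append((i, j1))
    return matches
```

The matching score of step 4 is then simply `len(matches)` (optionally normalized by the number of keypoints).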

As shown in Figure 12, the aggregation is performed through a linear combination of the subregion matching scores. In the following experiments, training on each of the three iris image databases is used to obtain the weighted coefficients.

Figure 12: The process of weighted matching.

5. Experimental Results and Discussions

5.1. Description of Iris Image Databases

Public and free iris image databases include CASIA (four versions) [37]. The CASIA database contains near-infrared images and is by far the most widely used in iris biometric experiments. The CASIA-V3 Interval database contains 2639 iris images from 395 different classes of 249 subjects; each iris image in this database is an 8-bit gray-level JPEG file. The CASIA-V3 Lamp database was collected using OKI's hand-held iris sensor and contains 16212 iris images from 819 different classes of 411 subjects; each iris image in this database is also an 8-bit gray-level JPEG file. Unlike CASIA-V3 Interval, CASIA-V3 Lamp images exhibit nonlinear deformation due to variations of visible illumination. The MMU-V1 [38] iris database contributes a total of 450 iris images taken with an LG IrisAccess2000; these iris images are contributed by 100 volunteers of different ages and nationalities. They come from Asia, the Middle East, Africa, and Europe, and each contributes 5 iris images for each eye. Together, these iris image databases form diverse iris representations in terms of sex, ethnicity, and the conditions under which the iris information was captured [39]. In our experiments, 700 iris images from 100 classes are selected randomly from CASIA-V3 Interval, 1000 iris images from 50 classes are selected randomly from CASIA-V3 Lamp, and all 450 images from 90 classes of MMU-V1 are selected to evaluate the proposed methods. The profiles of the databases used are presented in Table 1. Sample iris images are shown in Figure 13.

Table 1: The profiles of iris image databases.
Figure 13: Sample images from CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1 iris databases. (a) CASIA-V3-Interval, (b) CASIA-V3 Lamp, and (c) MMU-V1.
5.2. Evaluation Protocol

The metrics used for the quantitative evaluation of the proposed algorithm are the following.

(1) False accept rate (FAR): the probability of accepting an imposter as an authorized subject.
(2) False reject rate (FRR): the probability of an authorized subject being incorrectly rejected.
(3) Receiver operating characteristic (ROC) curve: the values of FAR and FRR at various threshold values, plotted as a curve; the ROC curve is used to report the performance of the proposed method.
(4) Equal error rate (EER): the point on the curve where FAR = FRR is known as the EER [12]; the lower the EER, the better the algorithm.
(5) Correct recognition rate (CRR): several images are tested with one-to-many matching; the CRR is the ratio of the number of images correctly classified to the total number of tested images.
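The FAR/FRR sweep and the EER read-off can be sketched as follows; a minimal illustration with our own naming, assuming similarity scores where genuine pairs score high and impostor pairs score low:

```python
import numpy as np

def roc_points(genuine, impostor, n_thresholds=200):
    """Sweep a decision threshold over similarity scores (given as lists)
    and return arrays of (FAR, FRR) values plus the thresholds used."""
    ts = np.linspace(min(impostor + genuine), max(impostor + genuine), n_thresholds)
    far = np.array([np.mean(np.asarray(impostor) >= t) for t in ts])  # impostors accepted
    frr = np.array([np.mean(np.asarray(genuine) < t) for t in ts])    # genuines rejected
    return far, frr, ts

def equal_error_rate(genuine, impostor):
    """The EER is read off where FAR and FRR cross."""
    far, frr, _ = roc_points(genuine, impostor)
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```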

5.3. Experimental Methodology

To measure the performance of the proposed algorithms, extensive experiments are carried out at various levels. Here, we mainly focus on six major sets of experiments.

The first set of experiments aims at selecting the optimal subfeature based on orientation. To achieve this purpose, the normalized FDPDF data are divided into 10 subsets. Each time, one of the 10 subsets is used as the test set and the other 9 subsets are put together to form the training set; the average error across all 10 trials is then computed. Finally, we design two loops of cross-validation [40, 41] to tune the two important parameters (C and γ) of the SVM for the three experimental iris image databases. The inner loop determines the optimal parameters of the SVM classifier, and the outer loop estimates the performance of the SVM classifier. When the highest classification accuracy is achieved with the RBF kernel, C is set at 512, 8, and 2 and γ at 0.03125, 0.125, and 0.65 for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, respectively.

The purpose of the second set of experiments is to select the optimal subfeature based on magnitude. In order to gain an unbiased estimate of the generalization accuracy, 10-fold cross-validation is used to evaluate the classification accuracy of the normalized NEPDF data with the SVM classifier. The parameter C is set at 64, 16, and 32 and γ at 0.064, 0.25, and 0.10 for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, respectively.

The goal of the third set of experiments is to obtain the optimal subfeature based on the compounded feature selection strategy, which combines orientation and magnitude; the compounded subfeature is indexed by a subscript corresponding to the orientation-based selection and a superscript corresponding to the magnitude-based selection. Similar to the second set of experiments, the normalized NFPDF data are fed to the SVM classifier, again using the 10-fold cross-validation strategy. The parameter C is set at 256, 32, and 16 and γ at 0.25, 0.63, and 0.018 for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, respectively.

The fourth set of experiments analyzes the performance of the three proposed subfeature selection strategies. The performance is evaluated in verification mode using ROC curves together with the EER, CRR, FAR, and FRR protocols.

In order to evaluate the performance of the proposed weighted matching method, the fifth set of experiments is as follows. Firstly, we obtain the matching rates of the bottom, middle, and upper subregions of the annular segmented iris, respectively. Secondly, we assign corresponding weighting coefficients to the three subregions. Finally, we compute the overall correct recognition rate and equal error rate.
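The fusion step above reduces to a weighted sum of the per-subregion matching scores. A minimal sketch, assuming the weights have already been trained (the numeric values below are made up, not the coefficients of Table 2):

```python
def fused_score(subregion_scores, weights):
    """Fuse per-subregion matching scores (bottom, middle, upper) with
    trained weighting coefficients, normalized to sum to 1."""
    total = float(sum(weights))
    return sum(w / total * s for w, s in zip(weights, subregion_scores))

# Illustrative fusion of three subregion scores with made-up weights:
score = fused_score([0.82, 0.91, 0.64], [0.5, 0.3, 0.2])
```

The fused score is then thresholded to produce the final accept/reject decision, so a subregion with a larger weight contributes more to that decision.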

In order to further analyze the efficiency of our proposed methods, we carry out quantitative comparisons with some existing state-of-the-art methods.

All approaches are implemented using C++ (with OpenCV) and MATLAB 9.2 and run on a 2.53 GHz Intel Core i3 CPU with 2.0 GB RAM.

5.4. Experimental Results and Performance Evaluation

In this section, we focus on analyzing the performance of our proposed methods which include discriminative feature selection strategies and weighted matching approach.

Figure 14 shows the classification accuracy of the orientation-based subfeatures obtained by the SVM classifier, where the full feature denotes all detected keypoints' features. From this figure, an overall downward trend of the classification accuracy rate (CAR) is observed from subfeature to subfeature on the CASIA-V3 (Interval and Lamp) iris databases, with one subfeature exhibiting the highest CAR of 88.86% for CASIA-V3 Interval and 92.35% for CASIA-V3 Lamp; these subfeatures are therefore optimal for the CASIA-V3 databases. Similarly, the highest CAR of 79.78% is achieved by one subfeature for MMU-V1, which is accordingly the optimal subfeature for that database. These experimental results show that subfeature selection achieves the intended effect.

Figure 14: Classification accuracy of subfeature based on orientation selection by SVM classifier.

Diagrams of the percentage of deleted keypoints based on orientation are shown in Figure 15. From this figure, it can be seen that the maximum percentages of deleted keypoints for a single iris image are 3.76%, 3.95%, and 6.71% for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, respectively.

Figure 15: Percentage of deleted keypoints based on orientation. (a) CASIA-V3 Interval, (b) CASIA-V3 Lamp, and (c) MMU-V1.

Figure 16 shows the classification accuracy of the magnitude-based subfeatures obtained by the SVM classifier. In this figure, one subfeature achieves the highest CAR for each of CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1. For CASIA-V3 Interval, the CAR of the full set of feature elements is 87.16%, while that of the optimal subfeature is 89.29%. For CASIA-V3 Lamp, the optimal subfeature's CAR is 92.70%, higher than the full feature's 91.02%. For MMU-V1, the highest CAR is 79.28%, an absolute improvement of 1.50% over the CAR of the full feature. Hence, these subfeatures are taken as the optimal magnitude-based subfeatures for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, respectively.

Figure 16: Classification accuracy of subfeature based on magnitude selection by SVM classifier.

Figure 17 shows the classification accuracy of the subfeatures based on the compounded selection strategy obtained by the SVM classifier. From this figure, it can be seen that the highest classification accuracies of 90.65%, 93.68%, and 80.68% are achieved for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, respectively. Therefore, the corresponding subfeatures are taken as the optimal subfeatures for the three databases under the compounded feature selection strategy.

Figure 17: Classification accuracy of subfeature based on compounded selection by SVM classifier.

Considering Figures 14, 16, and 17 as a whole, we can see that the compounded subfeature achieves the highest CAR for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1 alike. Therefore, we can safely conclude that the compounded selection strategy is the best of the three proposed feature selection methods.

Table 2 shows the weighting coefficients assigned to the different subregions for the three iris image databases; the experimental results further demonstrate that it is unreasonable to simply assign the same weight to all three subregions.

Table 2: Weighted coefficients assignment on subregion for three databases.
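The subregion weights of Table 2 are obtained by PSO-based training. A minimal PSO sketch in the spirit of Kennedy and Eberhart [36] is shown below; the inertia and acceleration constants are common textbook values, and the fitness function is a stand-in, since the paper's true objective is recognition performance on the training set.

```python
import random

def pso(fitness, dim=3, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization maximizing `fitness` over
    candidate weight vectors in [0, 1]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration constants (typical values)
    pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            val = fitness(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `fitness(weights)` would score a candidate weight vector by the verification performance of the weighted fusion on training data; any callable of that shape works here.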

In order to further evaluate the performance of the subfeature selection strategies and the weighted matching method, only the optimal subfeature is compared against the original whole feature in verification mode. Hence, we obtain FAR, FRR, and EER values for the CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1 databases. Figure 18 shows the ROC curves of FAR/FRR for the three iris image databases.

Figure 18: ROC curves of all keypoints feature and three subfeature selection strategies and weighted matching method. (a) ROC curve for CASIA-V3 Interval, (b) ROC curve for CASIA-V3 Lamp, and (c) ROC curve for MMU-V1.

From Figure 18, we find that each selected subfeature has a lower EER than the full keypoints feature: the EERs of the full feature are 0.932%, 1.864%, and 1.028% for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, respectively. After adopting the orientation-based subfeature strategy, the corresponding EERs decrease to 0.921%, 1.852%, and 1.018%; they decrease to 0.917%, 1.849%, and 0.960% with the magnitude-based subfeature, and further to 0.897%, 1.826%, and 0.932% with the compounded subfeature strategy. Figure 18 also provides a comparison of CRR at the EER point between the subfeatures and the whole feature. From the comparison results, it is evident that the CRRs of the subfeatures selected by the feature selection strategies are higher than those of the whole feature for all three databases.

Further analysis of the experimental results shows that weighted subregion matching performs reasonably well in terms of EER and CRR on all three databases. The EERs of weighted subregion matching are 0.875%, 1.812%, and 0.897% for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, respectively, which are the best among all the EERs. The CRRs of weighted subregion matching are also encouraging: 98.478%, 98.917%, and 98.360% for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1.

Figure 19 shows the CRR curves of our proposed methods under different thresholds; here, the threshold decides whether two images match by comparing the ratio of matched keypoint pairs against it. From Figure 19, it is observed that as the threshold increases, the CRRs also increase rapidly for the three databases. Moreover, the highest CRRs are achieved by the weighted subregion matching method: 99.82%, 99.93%, and 99.75% for CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, respectively.
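The CRR evaluation under a threshold can be sketched as a one-to-many identification with rejection; the comparison direction (a probe is kept only when its best matched-pair ratio clears the threshold) and all data below are assumptions for illustration.

```python
def correct_recognition_rate(score_matrix, probe_labels, gallery_labels, threshold):
    """One-to-many identification: each probe is assigned to the gallery
    entry with the highest matched-pair ratio, and counted as correct only
    if that best ratio also clears the decision threshold."""
    correct = 0
    for scores, true_label in zip(score_matrix, probe_labels):
        best = max(range(len(scores)), key=lambda j: scores[j])
        if scores[best] >= threshold and gallery_labels[best] == true_label:
            correct += 1
    return correct / len(probe_labels)
```

Sweeping the threshold and plotting the resulting CRR values yields a curve of the kind shown in Figure 19.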

Figure 19: CRR curve of proposed methods under different threshold. (a) CRR curve for CASIA-V3 Interval, (b) CRR curve for CASIA-V3 Lamp, and (c) CRR curve for MMU-V1.

From the above experimental results, it can be safely concluded that our proposed methods, which include the subfeature selection strategies and the weighted subregion matching approach, are effective and achieve low EER and high CRR.

5.5. Comparison with Existing Methods

At this stage, in order to further exhibit the efficiency of our proposed approach, we carry out a series of experiments providing a comparative analysis of our method against some state-of-the-art methods in terms of CRR and EER on the CASIA-V3 databases. As described in the introduction, existing feature extraction and recognition approaches fall into five categories. The comparative works shown in Table 3, which are among the best-known existing iris recognition schemes, cover these five categories. Table 3 summarizes the best results obtained by each method; the comparison results further demonstrate the encouraging performance of our proposed methods.

Table 3: Comparisons of CRR and EER.

We would like to point out that, in order to achieve unbiased comparisons, some experimental results are taken directly from published work carried out on the CASIA-V3 Interval and Lamp databases and are shown in Table 3. From this table, it is observed that the method reported in [42] has a better CRR than the proposed method on the CASIA-V3 databases. However, Bouraoui et al. [42] obtained those results under a particular definition of accuracy, and this hypothesis may be unreasonable for performance evaluation. From Table 3, we can also see that on the CASIA-V3 Interval database our proposed methods have a lower EER than the methods reported in [1, 6, 7, 9, 13, 42] as well as those proposed in [8, 43, 44], while on the CASIA-V3 Lamp database the EERs reported in [4, 48] are lower than ours, but those reported in [11, 45] are higher than that of the proposed methods.

We also compare the computational complexity of the proposed methods with the various known methods. From Table 4, it is observed that our proposed methods consume less time than the other methods reported there when the whole time consumption is taken into account. It should be pointed out that the experimental results on the CASIA-V1 database reported by Ma et al. [7] were achieved on a machine with 128 MB RAM running at 500 MHz. Our experimental environment is better than Ma's; nevertheless, the resolution of CASIA-V1 images equals that of CASIA-V3 Interval images but is lower than that of CASIA-V3 Lamp images, and the proposed method remains computationally effective considering that processing higher-resolution images consumes more time. The experimental results reported in [47] were obtained on a 3.00 GHz Pentium IV PC with 1 GB RAM; this environment is similar to ours, and the comparison results also demonstrate that our proposed methods' computational complexity is lower than that of [47].

Table 4: Comparison of the computation complexity.

In 2008, Roy and Bhattacharya [47] pointed out that feature subset selection algorithms can be classified into two categories, the filter and wrapper approaches, depending on whether feature selection is performed independently of the learning algorithm used to construct the verifier; our proposed subfeature selection methods fall into the filter category. Roy and Bhattacharya [47] further pointed out that the major drawback of the filter approach is that the selected subfeature may depend on the representational and inductive biases involved in building the classifier. However, since our proposed subfeature selection methods rely on intrinsic properties of the keypoints, namely their orientation and neighborhood magnitude, the proposed strategies are able to overcome this drawback of the filter approach and achieve good results.

6. Conclusion and Future Work

In this paper, we develop an iris recognition system based on optimal subfeature selection strategies and a weighted subregion matching method. Firstly, we describe the process of feature extraction and feature representation based on SIFT. Then, we propose three subfeature selection strategies. Finally, weighted subregion matching is proposed to obtain the final matching result. Three publicly accessible databases, CASIA-V3 Interval, CASIA-V3 Lamp, and MMU-V1, are used in a series of experiments. The experimental and comparison results demonstrate that our proposed methods can effectively improve the performance of an iris recognition system with respect to CRR and EER.

From the experiments, it is observed that our proposed discriminative subfeature selection strategies are able to discard redundant keypoints and reduce the dimension of the corresponding keypoint descriptor representation, and the compounded feature selection method achieves the best effect among the three proposed strategies. The proposed subregion matching method effectively overcomes the major drawback of standard SIFT technology, which does not consider the location of features. Moreover, assigning weighting coefficients to the three subregions of the segmented iris via a training scheme accords with the intrinsic distribution characteristics of iris features, and the PSO method effectively accelerates the training process. With reasonable weights in hand, the weighted subregion fusion strategy achieves further encouraging performance.

In future work, we will evaluate the proposed system on additional iris image databases and continue to investigate feature selection strategies and feature fusion methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous referees for their thorough reviews and constructive comments. The research in this paper uses the CASIA databases provided by the Institute of Automation, Chinese Academy of Sciences [37], and the MMU database provided by Multimedia University [38]. This research is supported by the National Natural Science Foundation of China (Grant no. 60971089) and the State Important Achievements Transfer Projects of China (Grant no. 2012258).

References

  1. J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148–1161, 1993. View at Publisher · View at Google Scholar · View at Scopus
  2. J. Daugman, “New methods in iris recognition,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 5, pp. 1167–1175, 2007. View at Publisher · View at Google Scholar · View at Scopus
  3. J. Daugman, “Statistical richness of visual phase information: update on recognizing persons by iris patterns,” International Journal of Computer Vision, vol. 45, no. 1, pp. 25–38, 2001. View at Publisher · View at Google Scholar · View at Scopus
  4. R. P. Wildes, J. C. Asmuth, G. L. Green et al., “A machine-vision system for iris recognition,” Machine Vision and Applications, vol. 9, no. 1, pp. 1–8, 1996. View at Google Scholar · View at Scopus
  5. W. W. Boles and B. Boashash, “A human identification technique using images of the iris and wavelet transform,” IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185–1188, 1998. View at Publisher · View at Google Scholar · View at Scopus
  6. L. Ma, T. Tan, Y. Wang, and D. Zhang, “Personal identification based on iris texture analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1519–1533, 2003. View at Publisher · View at Google Scholar · View at Scopus
  7. L. Ma, T. Tan, Y. Wang, and D. Zhang, “Efficient iris recognition by characterizing key local variations,” IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004. View at Publisher · View at Google Scholar · View at Scopus
  8. C.-C. Tsai, H.-Y. Lin, J. Taur, and C.-W. Tao, “Iris recognition using possibilistic fuzzy matching on local features,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 42, no. 1, pp. 150–162, 2012. View at Publisher · View at Google Scholar · View at Scopus
  9. R. Zhu, J. Yang, and R. Wu, “Iris recognition based on local feature point matching,” in Proceedings of the International Symposium on Communications and Information Technologies (ISCIT '06), pp. 451–454, October 2006. View at Publisher · View at Google Scholar · View at Scopus
  10. Z. Sun and T. Tan, “Ordinal measures for iris recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2211–2226, 2009. View at Publisher · View at Google Scholar · View at Scopus
  11. M. Zhang, Z. Sun, and T. Tan, “Perturbation-enhanced feature correlation filter for robust iris recognition,” IET Biometrics, vol. 1, no. 1, pp. 37–45, 2012. View at Google Scholar
  12. H. Mehrotra, B. Majhi, and P. Gupta, “Robust iris indexing scheme using geometric hashing of SIFT keypoints,” Journal of Network and Computer Applications, vol. 33, no. 3, pp. 300–313, 2010. View at Publisher · View at Google Scholar · View at Scopus
  13. C. Belcher and Y. Du, “Region-based SIFT approach to iris recognition,” Optics and Lasers in Engineering, vol. 47, no. 1, pp. 139–147, 2009. View at Publisher · View at Google Scholar · View at Scopus
  14. H. Proença and L. A. Alexandre, “Iris recognition: an analysis of the aliasing problem in the iris normalization stage,” in Proceedings of the International Conference on Computational Intelligence and Security (ICCIAS '06), pp. 1771–1774, October 2006. View at Publisher · View at Google Scholar · View at Scopus
  15. J. Huang, X. You, Y. Yuan, F. Yang, and L. Lin, “Rotation invariant iris feature extraction using Gaussian Markov random fields with non-separable wavelet,” Neurocomputing, vol. 73, no. 4–6, pp. 883–894, 2010. View at Publisher · View at Google Scholar · View at Scopus
  16. D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 2, pp. 1150–1157, September 1999. View at Scopus
  17. C. Paganelli, M. Peroni, M. Riboldi et al., “Scale invariant feature transform in adaptive radiation therapy: a tool for deformable image registration assessment and re-planning indication,” Physics in Medicine and Biology, vol. 58, no. 2, pp. 287–299, 2013. View at Google Scholar
  18. M. R. Daliri, “Automated diagnosis of Alzheimer disease using the scale-invariant feature transforms in magnetic resonance images,” Journal of Medical Systems, vol. 36, no. 2, pp. 995–1000, 2011. View at Publisher · View at Google Scholar · View at Scopus
  19. S. Pan, G. Shavit, M. Penas-Centeno et al., “Automated classification of protein crystallization images using support vector machines with scale-invariant texture and Gabor features,” Acta Crystallographica D, vol. 62, no. 3, pp. 271–279, 2006. View at Publisher · View at Google Scholar · View at Scopus
  20. S. Prakash and P. Gupta, “A rotation and scale invariant technique for ear detection in 3D,” Pattern Recognition Letters, vol. 33, no. 14, pp. 1924–1931, 2012. View at Publisher · View at Google Scholar · View at Scopus
  21. T. Darom and Y. Keller, “Scale-invariant features for 3-D mesh models,” IEEE Transactions on Image Processing, vol. 21, no. 5, pp. 2758–2769, 2012. View at Publisher · View at Google Scholar · View at Scopus
  22. H. C. Yang, S. B. Zhang, and Y. B. Wang, “Robust and precise registration of oblique images based on scale-invariant feature transformation algorithm,” IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 4, pp. 783–787, 2012. View at Publisher · View at Google Scholar · View at Scopus
  23. Z. Arican and P. Frossard, “Scale-invariant features and polar descriptors in omnidirectional imaging,” IEEE Transactions on Image Processing, vol. 21, no. 5, pp. 2412–2423, 2012. View at Publisher · View at Google Scholar · View at Scopus
  24. J. Zhang and T. Tan, “Affine invariant classification and retrieval of texture images,” Pattern Recognition, vol. 36, no. 3, pp. 657–664, 2003. View at Publisher · View at Google Scholar · View at Scopus
  25. H. Soyel and H. Demirel, “Localized discriminative scale invariant feature transform based facial expression recognition,” Computers and Electrical Engineering, vol. 38, no. 5, pp. 1299–1309, 2012. View at Publisher · View at Google Scholar · View at Scopus
  26. M. Mu, Q. Ruan, and S. Guo, “Shift and gray scale invariant features for palmprint identification using complex directional wavelet and local binary pattern,” Neurocomputing, vol. 74, no. 17, pp. 3351–3360, 2011. View at Publisher · View at Google Scholar · View at Scopus
  27. S. Khalighi, P. Tirdad, F. Pak et al., “Shift and rotation invariant iris feature extraction based on non-subsampled contourlet transform and GLCM,” in Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods (ICPRAM '12), vol. 2, pp. 470–475, February 2012.
  28. A. F. Fernando, T. G. Pdero, R. A. Virginia et al., “Iris recognition based on SIFT features,” in Proceedings of the 1st IEEE International Conference on Biometrics, Identity and Security (BidS '09), pp. 1–8, September 2009.
  29. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004. View at Publisher · View at Google Scholar · View at Scopus
  30. D. G. Lowe, “Local feature view clustering for 3D object recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. I682–I688, December 2001. View at Scopus
  31. K. Mikolajczyk and C. Schmid, “Scale & affine invariant interest point detectors,” International Journal of Computer Vision, vol. 60, no. 1, pp. 63–86, 2004. View at Publisher · View at Google Scholar · View at Scopus
  32. V. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
  33. J. Wang, B. Zhang, S. Wang, M. Qi, and J. Kong, “An adaptively weighted sub-pattern locality preserving projection for face recognition,” Journal of Network and Computer Applications, vol. 33, no. 3, pp. 323–332, 2010. View at Publisher · View at Google Scholar · View at Scopus
  34. K. P. Hollingsworth, K. W. Bowyer, and P. J. Flynn, “The best bits in an Iris code,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 6, pp. 964–973, 2009. View at Publisher · View at Google Scholar · View at Scopus
  35. C.-C. Tsai, J. Taur, and C.-W. Tao, “Iris recognition based on relative variation analysis with feature selection,” Optical Engineering, vol. 47, no. 9, pp. 1–11, 2008. View at Publisher · View at Google Scholar · View at Scopus
  36. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, December 1995. View at Scopus
  37. CASIA Iris image Databases, http://www.cbsr.ia.ac.cn/IrisDatabase.htm.
  38. MMU Iris Image Databases, http://pesona.mmu.edu.my/~ccteo.
  39. A. Abhyankar and S. Schuckers, “A novel biorthogonal wavelet network system for off-angle iris recognition,” Pattern Recognition, vol. 43, no. 3, pp. 987–1007, 2010. View at Publisher · View at Google Scholar · View at Scopus
  40. H.-L. Chen, B. Yang, G. Wang et al., “A novel bankruptcy prediction model based on an adaptive fuzzy k-nearest neighbor method,” Knowledge-Based Systems, vol. 24, no. 8, pp. 1348–1359, 2011. View at Publisher · View at Google Scholar · View at Scopus
  41. Y. Chen, F. Y. Yang, and H. L. Chen, “An effective iris recognition system based on combined feature extraction and enhanced support vector machine classifier,” Journal of Information and Computational Science, vol. 10, no. 17, 2013. View at Google Scholar
  42. I. Bouraoui, S. Chitroub, and A. Bouridane, “Does independent component analysis perform well for iris recognition?” Intelligent Data Analysis, vol. 16, no. 3, pp. 409–426, 2012. View at Google Scholar
  43. K. Roy, P. Bhattacharya, and C. Y. Suen, “Iris recognition using shape-guided approach and game theory,” Pattern Analysis and Applications, vol. 14, no. 4, pp. 329–348, 2011. View at Publisher · View at Google Scholar · View at Scopus
  44. L. Masek and P. Kovesi, “Matlab source code for a biometric identification system based on iris patterns,” The School of Computer Science and Software Engineering, The University of Western Australia, 2003. View at Google Scholar
  45. Z. He, T. Tan, Z. Sun, and X. Qiu, “Toward accurate and fast iris segmentation for iris biometrics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 9, pp. 1670–1684, 2009. View at Publisher · View at Google Scholar · View at Scopus
  46. K. Roy and P. Bhattacharya, “Iris recognition with support vector machines,” Lecture Notes in Computer Science, vol. 3832, pp. 486–492, 2006. View at Google Scholar · View at Scopus
  47. K. Roy and P. Bhattacharya, “Optimal features subset selection and classification for iris recognition,” Eurasip Journal on Image and Video Processing, vol. 2008, Article ID 743103, 20 pages, 2008. View at Publisher · View at Google Scholar · View at Scopus
  48. R. P. Wildes, “Iris recognition: an emerging biometric technology,” Proceedings of the IEEE, vol. 85, no. 9, pp. 1348–1363, 1997. View at Publisher · View at Google Scholar · View at Scopus