Research Article  Open Access
Rachida Tobji, Wu Di, Naeem Ayoub, "A Synthetic Fusion Rule Based on FLDA and PCA for Iris Recognition Using 1D Log-Gabor Filter", Mathematical Problems in Engineering, vol. 2019, Article ID 7951320, 11 pages, 2019. https://doi.org/10.1155/2019/7951320
A Synthetic Fusion Rule Based on FLDA and PCA for Iris Recognition Using 1D Log-Gabor Filter
Abstract
Iris recognition is one of the most useful methods to identify or verify people in biometric recognition systems. Iris patterns contain many features that distinguish people from each other. In this paper, a novel iris recognition method is proposed based on the fusion of Fisher Linear Discriminant Analysis (FLDA) with an embedded Principal Component Analysis (PCA) method. First, we use a 1D Log-Gabor filter to extract the iris features from the approximation part. Second, we obtain an appropriate degree of clarity for the iris with the FLDA/PCA fusion, which eliminates optical reflections on the iris image. Experiments with our proposed algorithm are performed on the CASIA V1 database. The results show good performance, with a recognition rate of up to 99.99%.
1. Introduction
Recognition systems have become widespread and effective, especially after the progress made in the area of information technology. Biometric identification methods allow recognizing or verifying the identity of people with a high degree of reliability [1]. The texture of the iris is remarkable for its ability to yield systems with a very low error rate [2]. It was used for the first time in high-safety environments such as installations at high-risk nuclear power plants. However, this high level of performance can only be achieved at the cost of heavy restrictions imposed on the person during acquisition [3, 4]. The prospect is to relax these constraints in order to make the systems more user-friendly, but the resulting image quality is strongly degraded by blur and lighting variations. It has been shown that the use of particular information (shape of the eyelid, the inner corner of the eye, eyelashes, etc.), in addition to the classic iris texture, brings important improvements in this degraded context [5]. Figure 1 shows examples of different contrast enhancement levels such as imadjust, clahe, and msr. For illustration, we have chosen one image from the CASIA iris database V1.
The biometric authentication decision is based on a comparison between two irises. The texture must be transformed into a dimensionless coordinate system to handle variability such as pupil dilation. In the rubber sheet model introduced by Daugman [3], the texture between the two non-concentric borders of the iris was represented by a rectangle. The parametric contours of the iris region were used to unroll the iris texture and produce the normalized image. Finally, identifying the pixels included in the iris image made it possible to generate a segmentation mask, which was used to remove artifacts from the normalized image at the mapping step.
Most approaches for iris segmentation look for contours using a gradient-based active contouring technique [6, 7]. In 1936, Frank Burch [8] first proposed the idea of using the texture information of the iris. Leonard Flom et al. [9] proposed the idea that the iris patterns of two different people differ. In 1987, the same authors first proposed using iris identification methods to identify people in biometric security systems. In 1991, Daugman proposed a mathematical model for identifying people through iris patterns [10]. In 1994, Daugman introduced the complete method for iris recognition [11]. Daugman's iris identification method is one of the most successful and is widely used in iris identification systems [12]. Iris recognition systems are very successful for people identification at border controls and in highly sensitive areas. The UAE is one of the most successful countries using this biometric security system [10]: more than 3 billion comparisons are made each day with no false acceptance observed, according to officials of the Ministry of Interior. Iris recognition methods are also adopted by other agencies to identify people; Daugman's system, for example, was used to identify an Afghan woman photographed in 2002. Iris identification systems show a probability of correctness close to 100% [13]. Wildes et al. [14] constructed the iris texture with a Laplacian pyramid at four different resolutions. They used a normalized correlation scheme to determine the similarity between the input images and a model image. Lim et al. [15] proposed a new method for iris texture information extraction by decomposing the image into four levels with the 2D Haar wavelet transform. They quantized the high frequencies to form an 87-bit code. Bae et al. [16] used independent component analysis to derive iris signals and projected them onto a bank of basis vectors.
They also used a quantization method on the resultant coefficients as features. Sepehr et al. [17] used the real part of the 1D Gabor filter and reduced the dimensionality of the extracted features with 2D-PCA. Wen-Shiung et al. [18] presented biometric iris recognition using 2D-LDA embedded with 2D-PCA and the Euclidean distance, recognizing the iris pattern by comparing its features with the iris features enrolled in the database. Fusion-based techniques such as data-level fusion, feature-level fusion, and feature-extraction-level fusion are also actively used in recognition and tracking algorithms [19]. Effective fusion of RGB and infrared modalities is very important for exploiting the correlation between the heterogeneous modalities [20, 21]. Lan et al. [22] proposed a joint sparse representation algorithm for feature-level fusion. In this method, feature-level fusion is performed on reliable features while detected unreliable features are ignored.
In this paper, a new PCA and Fisher LDA fusion based iris recognition algorithm using the 1D Log-Gabor filter is developed:
(1) In our proposed method, to find an optimal transformation, we use the FLDA method, which utilizes the conventional Fisher criterion to minimize the within-class distance and maximize the between-class distance. Feature vectors are then formed by applying PCA on the approximation band.
(2) The eigenvectors of the covariance matrices computed by the FLDA and PCA methods are fused into a single covariance matrix. The resultant fused matrix is then used to compute the eigenvectors for data projection.
(3) We integrate the pixel data multiplied by filters and coefficients over their support domain. The image texture information is then extracted and encoded, marking the corrupted bits in the template with its associated noise mask.
The remaining sections of this paper are organized as follows: Section 2 briefly discusses related work on iris biometric systems based on the Hamming distance and the feature encoding schemes adopted by different researchers. Section 3 describes in detail the components of the iris recognition system with preprocessing (image acquisition, segmentation and normalization, feature extraction and encoding, and matching) along with the proposed iris detection method. Experimental results and discussion are presented in Section 4. Finally, conclusions are drawn in Section 5.
2. Related Works
Many researchers have used different segmentation, analysis, and characterization techniques in their iris detection methods. For segmentation of the iris, two methods have been commonly used: the integro-differential operator [23] and the Hough transform [24–28]. For characterization of the iris, the most useful methods are the Gabor wavelet transform applied by Daugman, the Gabor filter [24], the Laplacian pyramid [25], and the steerable pyramid transform [26]. We have studied various well-known algorithms for iris recognition [17, 18, 27, 29–35] and propose a new method for iris detection based on the fusion of FLDA/PCA. We also employ a 1D Log-Gabor filter and use the Hamming distance for comparison between two iris templates. We have compared the results of our proposed iris recognition approach with state-of-the-art algorithms.
2.1. Proposed Method
In this paper, we use a statistical method to account for eyelashes [6]; we apply the feature extraction algorithm based on the fusion of FLDA and PCA to original image patches to compute the projection matrices. Based on the information in these matrices, the iris images can be projected to a lower dimension. Recognition is done with an iris matcher based on 1D Log-Gabor filter features and the Hamming distance to uniquely identify the iris. Figure 2 shows the flow diagram of the iris biometric system, which is described in detail in the following subsections. In our experiments, for the purpose of illustration, we use the CASIA V1 database [36]. All images in this database are grayscale and the value of each pixel is a single sample.
2.2. Preprocessing
2.2.1. Image Acquisition
The acquisition of an iris image is considered one of the most important parts of a biometric system. In the eye image, the iris is a dark object located behind the cornea, which is a highly reflective mirror. These characteristics make the iris a very difficult object to photograph. Once the iris image is acquired, an iris recognition system can be composed of several modules: segmentation, normalization, and finally light correction and contrast enhancement. For illustration purposes, we use images from the CASIA iris V1 database.
2.2.2. Iris Segmentation
The acquired eye image does not contain only iris information. It is necessary to segment this information and isolate it from the rest of the image. The segmentation process is based on isolating the iris from the white area of the eye and the eyelids, as well as detecting the pupil inside the iris disc. Generally, the iris and pupil are approximated by circles and the eyelids by ellipses. We applied the Hough transform [25, 37, 38], a technique that can be used to isolate objects of simple geometric shape in the image; in general, we limit ourselves to the lines, circles, or ellipses present in the image. Figure 3 shows the segmentation results for the iris image. One of the great advantages of the Hough transform is that it is tolerant to occlusions of the desired objects and remains relatively unaffected by noise. Figure 4 shows the general diagram of an iris segmentation system.
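The circle-voting idea behind the Hough transform described above can be sketched with a minimal, pure-NumPy circular Hough transform. This is the voting scheme only, on an already-binarized edge image; a production pipeline would first run an edge detector and restrict the radius range, and the function name is illustrative:

```python
import numpy as np

def hough_circles(edge_img, radii):
    """Naive circular Hough transform: each edge pixel votes for every
    candidate centre lying at distance r from it, for each radius r."""
    h, w = edge_img.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for ri, r in enumerate(radii):
        cy = np.round(ys[:, None] - r * np.sin(thetas)).astype(int)
        cx = np.round(xs[:, None] - r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    # The accumulator peak gives the best (radius, centre) hypothesis.
    ri, y0, x0 = np.unravel_index(acc.argmax(), acc.shape)
    return radii[ri], y0, x0

# Synthetic test: an edge circle of radius 20 centred at (40, 50).
img = np.zeros((80, 100), dtype=np.uint8)
t = np.linspace(0, 2 * np.pi, 360)
img[np.round(40 + 20 * np.sin(t)).astype(int),
    np.round(50 + 20 * np.cos(t)).astype(int)] = 1
r, y, x = hough_circles(img, radii=[15, 20, 25])
print(r, y, x)  # recovers radius 20 near centre (40, 50)
```

The tolerance to occlusion mentioned above follows directly from the voting: even if part of the circle is hidden by an eyelid, the remaining edge pixels still accumulate votes at the true centre.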
2.2.3. Iris Normalization
The iris is a disc pierced inside the pupil. The two circles that constitute the borders of the iris with the white area of the eye and the borders of the pupil with the iris are not perfectly concentric. In addition, with the contractions and dilations of the iris and the variation of the acquisition distances between people and the lens, the size of the disk of the iris is not always constant. To allow a comparison of two irises, it is therefore necessary to normalize the iris detected at a fixed size.
For this, the iris region characterized by parametric outlines is transformed into a band of invariant size. Daugman [3] proposed using the rubber sheet transformation to pass from a Cartesian system to a pseudopolar system for normalizing the iris disc; its pictorial meaning can be seen as an attempt to stretch the iris disc like a rubber sheet. The process can be explained as follows.
Each pixel of the iris in the Cartesian domain is assigned a corresponding point in the pseudopolar domain according to the distance of the pixel from the centers of the circles and its angle. More precisely, the transformation is done as follows:

x(r, θ) = (1 − r) x_p(θ) + r x_i(θ),
y(r, θ) = (1 − r) y_p(θ) + r y_i(θ),

where x_p(θ) represents the abscissa and y_p(θ) the ordinate of the point on the detected pupil boundary that makes an angle θ with a chosen direction through the center of the pupil. Then x_i(θ) and y_i(θ) represent the coordinates of the point obtained by the same principle on the iris contour. The normalized image is a rectangle of constant size, 80 × 512 pixels. The width of the image represents the variation along the angular axis, while the height represents the variation along the radial axis. Figure 5 shows an illustration of the iris normalization module.
(a) Standardization
(b) Rubber sheet transformation
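The rubber sheet mapping just described can be sketched as follows. The function and parameter names are illustrative, and nearest-neighbour sampling stands in for the interpolation a full implementation would use:

```python
import numpy as np

def rubber_sheet(img, pupil, iris, out_h=80, out_w=512):
    """Daugman rubber-sheet model: map the annulus between the pupil
    circle (xp, yp, rp) and iris circle (xi, yi, ri) to a fixed-size
    rectangle. Rows = radial axis r in [0, 1], columns = angle theta."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    r = np.linspace(0, 1, out_h)[:, None]
    # Boundary points on the pupil and iris contours for each theta.
    x_p, y_p = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
    x_i, y_i = xi + ri * np.cos(theta), yi + ri * np.sin(theta)
    # Linear interpolation between the two (possibly non-concentric) circles.
    x = (1 - r) * x_p + r * x_i
    y = (1 - r) * y_p + r * y_i
    return img[np.clip(np.round(y).astype(int), 0, img.shape[0] - 1),
               np.clip(np.round(x).astype(int), 0, img.shape[1] - 1)]

eye = np.random.default_rng(0).integers(0, 256, (280, 320), dtype=np.uint8)
strip = rubber_sheet(eye, pupil=(160, 140, 30), iris=(162, 141, 100))
print(strip.shape)  # (80, 512)
```

Because r runs from 0 at the pupil boundary to 1 at the iris boundary, the mapping remains valid even when the two circles are not concentric.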
2.2.4. Iris Enhancement
In this step, a homomorphic filter is used for iris image enhancement. To apply this filter, we first reduce the contribution of illumination using the illumination-reflectance model [39]. With this technique, an image f(x, y) can be expressed as the product of the reflectance of the objects in the scene and the illumination component as follows:

f(x, y) = i(x, y) r(x, y),

where r(x, y) and i(x, y) are the reflectance component and the illumination component, respectively.
Homomorphic filtering is used to enhance the reflectance and reduce the contribution of illumination, retaining only the high frequencies. However, the two components should be independent; in order to separate them, we apply the logarithm transform to the product above:

z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y).

Then the Fourier transform is applied, which can be written as

Z(u, v) = I(u, v) + R(u, v),

where I(u, v) and R(u, v) represent the Fourier transforms of ln i(x, y) and ln r(x, y), respectively. In the frequency domain, Z(u, v) is high-pass filtered by means of a filter function H(u, v) [40]. The filtered version S(u, v) can be calculated as follows:

S(u, v) = H(u, v) Z(u, v) = H(u, v) I(u, v) + H(u, v) R(u, v).

We then take the inverse Fourier transform:

s(x, y) = F⁻¹{S(u, v)}.

By applying the exponential operation to s(x, y), we obtain the filtered image g(x, y):

g(x, y) = exp(s(x, y)).

Finally, the high-pass filter H(u, v) is calculated as follows:

H(u, v) = 1 / (1 + [D₀ / D(u, v)]^(2n)),

where D₀ is the cutoff distance from the center and n defines the order of the filter. D(u, v) is given by

D(u, v) = √((u − M/2)² + (v − N/2)²),

where M is the number of rows and N is the number of columns of the original image. Figure 6 shows the homomorphic filtering process.
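A minimal sketch of the homomorphic filtering chain above (log, FFT, high-emphasis filter, inverse FFT, exponential). The gain parameters gamma_l and gamma_h and the cutoff d0 are illustrative choices, not values from the paper:

```python
import numpy as np

def homomorphic_filter(img, d0=30.0, gamma_l=0.5, gamma_h=2.0, n=2):
    """Homomorphic filtering: log -> FFT -> high-emphasis filter -> IFFT -> exp.
    Attenuates low-frequency illumination (gamma_l < 1) and boosts
    high-frequency reflectance (gamma_h > 1)."""
    m, N = img.shape
    log_img = np.log1p(img.astype(np.float64))        # ln(1 + f) for stability
    F = np.fft.fftshift(np.fft.fft2(log_img))
    u = np.arange(m)[:, None] - m / 2                 # distance D(u, v) from centre
    v = np.arange(N)[None, :] - N / 2
    D = np.sqrt(u**2 + v**2)
    # Butterworth-style high-pass response, rescaled into [gamma_l, gamma_h].
    H = gamma_l + (gamma_h - gamma_l) / (1.0 + (d0 / (D + 1e-9))**(2 * n))
    g = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    return np.expm1(g)                                # invert the log transform

img = np.tile(np.linspace(10, 200, 64), (64, 1))      # smooth illumination ramp
out = homomorphic_filter(img)
print(out.shape)  # (64, 64)
```

Here the pure high-pass H of the equations is lifted into a high-emphasis form (gamma_l at DC instead of 0) so the output keeps a usable brightness level, a common practical variant.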
Algorithm 1 and Figure 7 show the procedure of iris preprocessing.
(1) Function PreProcessing(images)
(2)   PreProcessed ← [ ]
(3)   for image ∈ images do
(4)     eyes ← crop-to-eyes(image)
(5)     eyes ← greyscale(eyes)
(6)     eyes ← segmentation(eyes)
(7)     iris ← detect-iris(eyes)
(8)     iris ← normalization(iris)
(9)     enhanced ← homomorphic-filter(iris)
(10)    PreProcessed ← PreProcessed ∪ {enhanced}
(11)  return PreProcessed
3. Feature Extraction and Encoding
3.1. FLDA/PCA
In previous works, many feature extraction algorithms have been proposed. In this section, we describe our FLDA/PCA fusion based iris feature extraction scheme.
FLDA utilizes the conventional Fisher criterion, minimizing the within-class distance and maximizing the between-class distance, to find an optimal transformation. Assume there are C training classes and N = 64 × 256. The within-class scatter matrix S_w and the between-class scatter matrix S_b are calculated as

S_w = Σ_{i=1}^{C} Σ_{x_k ∈ X_i} (x_k − μ_i)(x_k − μ_i)^T,
S_b = Σ_{i=1}^{C} N_i (μ_i − μ)(μ_i − μ)^T,

where N_i describes the number of samples of class i and μ_i is the mean image of class i. In the nonsingular case, the optimal projection W_opt is selected as the matrix that maximizes the ratio of the determinant of the between-class scatter matrix of the projected samples to the determinant of the within-class scatter matrix of the projected samples:

W_opt = arg max_W |W^T S_b W| / |W^T S_w W| = [w_1, w_2, …, w_m],

where {w_i} is the set of generalized eigenvectors of S_b and S_w and {λ_i} are the corresponding eigenvalues, S_b w_i = λ_i S_w w_i, which is advantageous for class-specific linear projection. Equivalently, the criterion can be expressed with traces: we maximize tr(W^T S_b W) while minimizing tr(W^T S_w W), which amounts to an eigenproblem on S_w^{-1} S_b. To solve this equation when S_w is singular, we apply PCA to the data matrix first.
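The scatter matrices and the Fisher criterion above can be sketched as follows. Solving via the pseudo-inverse of S_w is one common way to handle the generalized eigenproblem, and the toy data are illustrative:

```python
import numpy as np

def flda(X, y):
    """Fisher LDA: compute within-class scatter S_w and between-class
    scatter S_b, then the projection W from the generalized eigenproblem
    S_b w = lambda * S_w w (solved here via pinv(S_w) @ S_b)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)                # within-class scatter
        Sb += len(Xc) * np.outer(mu_c - mu, mu_c - mu)   # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    W = evecs[:, order[:len(classes) - 1]].real  # at most C-1 useful directions
    return W, Sw, Sb

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
W, Sw, Sb = flda(X, y)
print(W.shape)  # (5, 1)
```

Projecting the samples with X @ W separates the two class means along the discriminant direction, which is exactly the behaviour the Fisher criterion optimizes.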
The recognition rates are achieved with an eigen-method for dimensionality reduction, and simple classifiers are used in the reduced feature space by applying class-specific linear methods [41]. The training of the iris detector by PCA proceeds in the following steps:
(1) Compute the mean of the input eye images.
(2) Obtain the mean-shifted images by subtracting the mean from the input images.
(3) Calculate the eigenvectors and eigenvalues of the covariance matrix of the mean-shifted images.
(4) Order the eigenvectors in decreasing order of their corresponding eigenvalues.
(5) Retain only the largest eigenvalues with their eigenvectors.
(6) Use the retained eigenvectors to project the mean-shifted images into the eigenspace.
(7) Compute the eigenvectors and eigenvalues of the reduced covariance matrix.
(8) Form the transformation matrix A that gives the transformed space.
(9) Represent each iris image by its projection, the iris image code.
(10) Finally, reduce the data dimension using the 1D Log-Gabor filter.
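Steps (1)-(6) of the PCA training procedure can be sketched as follows; the data and the choice of k are illustrative:

```python
import numpy as np

def pca(X, k):
    """PCA by eigendecomposition of the covariance of mean-shifted data:
    subtract the mean, compute eigenvectors, keep the k with the largest
    eigenvalues, and project into the reduced eigenspace."""
    mean = X.mean(axis=0)
    A = X - mean                                # mean-shifted images (step 2)
    C = A.T @ A / (len(X) - 1)                  # covariance matrix (step 3)
    evals, evecs = np.linalg.eigh(C)            # eigh: ascending eigenvalues
    order = np.argsort(evals)[::-1][:k]         # largest eigenvalues first (4-5)
    components = evecs[:, order]
    return A @ components, components, mean     # projection (step 6)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
Z, components, mean = pca(X, k=3)
print(Z.shape)  # (50, 3)
```

For image-sized data, a practical implementation would decompose the smaller Gram matrix A Aᵀ instead of the full covariance, as in the classic eigenface trick.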
3.2. 1D LogGabor
The purpose of this step is to extract a characteristic and discriminating representation of the iris and to produce a template for verification in the pattern comparison step. In general, the main existing methods in the literature use the 1D Log-Gabor filter [27, 42, 43], the 2D Gabor filter [3, 6, 44, 45], the wavelet transform [46–48], or the discrete cosine transform [49]. A comparative study of the different filters for extracting iris characteristics reported in the literature has been proposed to identify their impact on iris recognition. This study showed that the Log-Gabor filter provides the best recognition performance.
The Log-Gabor filter is used to create the templates based on the iris pattern information in the feature encoding step [50]. The difference between pixel-intensity levels represents the difference between two iris images, but also errors that occur because of lighting variations. To overcome this issue, Daugman [11] used a normalization method and extracted features from the iris image by convolution with a 1D Log-Gabor filter. The 1D Log-Gabor filter can be calculated as follows:

G(f) = exp(−(log(f / f₀))² / (2 (log(σ / f₀))²)),

where f₀ is the center frequency and σ/f₀ controls the bandwidth of the filter.
In this method, the pixel data multiplied by filters and coefficients are integrated over their support domain. The image texture information is extracted and encoded, and the corrupted bits in the template are marked by its associated noise mask.
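A sketch of the 1D Log-Gabor response and a Daugman-style 2-bit phase quantization of the filtered signal; f0 and the sigma/f0 ratio are illustrative parameters, not the paper's settings:

```python
import numpy as np

def log_gabor_1d(n, f0=0.1, sigma_ratio=0.5):
    """Frequency response of the 1D Log-Gabor filter:
    G(f) = exp(-(log(f/f0))^2 / (2 (log(sigma/f0))^2)), with G(0) = 0
    (the filter has no DC component by construction)."""
    f = np.fft.fftfreq(n)
    G = np.zeros(n)
    pos = f > 0
    G[pos] = np.exp(-(np.log(f[pos] / f0))**2 /
                    (2 * np.log(sigma_ratio)**2))
    return G

def encode_row(row, G):
    """Filter one normalized-iris row in the frequency domain and
    quantize the phase of the complex response into 2 bits per sample."""
    resp = np.fft.ifft(np.fft.fft(row) * G)
    return np.stack([resp.real >= 0, resp.imag >= 0]).astype(np.uint8)

G = log_gabor_1d(512)
row = np.sin(2 * np.pi * 0.1 * np.arange(512))
bits = encode_row(row, G)
print(bits.shape)  # (2, 512)
```

Because only positive frequencies are kept, the filtered response is complex, and its phase quadrant supplies the two template bits per sample.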
3.3. Matching
Iris recognition involves matching two iris codes. A dissimilarity score is calculated to characterize the degree of resemblance between two iris codes. The matching algorithm computes this score by means of the Hamming distance. This distance is expressed with the XOR operator ⊕ and the logical AND operator ∩. The similarity score represents the number of differing unmasked bits between the two iris codes, normalized by the number of unmasked bits common to both iris codes:

HD = ‖(codeA ⊕ codeB) ∩ maskA ∩ maskB‖ / ‖maskA ∩ maskB‖,

where codeA and codeB are two codes computed from two iris images by the method previously described, and maskA and maskB represent their associated masks. Literally, the Hamming distance counts the bits that differ and are valid in both iris images, using the template bits and mask bits, respectively. The total number of valid comparisons is ‖maskA ∩ maskB‖. The lower the Hamming distance, the more similar the two codes. A distance of 0 corresponds to a perfect match between the two images, whereas two images of different people will have a Hamming distance close to 0.5.
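The masked Hamming distance above can be sketched directly; the 9600-bit templates mirror the template size used in the experiments, and the random codes are illustrative:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Masked Hamming distance: fraction of valid bits (unmasked in both
    templates) that differ between the two iris codes."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 9600, dtype=np.uint8)
b = rng.integers(0, 2, 9600, dtype=np.uint8)
mask = np.ones(9600, dtype=np.uint8)
print(hamming_distance(a, a, mask, mask))  # 0.0 (identical codes)
print(hamming_distance(a, b, mask, mask))  # close to 0.5 (independent codes)
```

Independent random codes agree on about half their bits, which is exactly why impostor comparisons cluster around 0.5 while genuine comparisons fall near 0.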
4. Experimental Results and Discussion
4.1. Dataset
To evaluate performance, we tested the proposed scheme on the CASIA iris image database (version 1.0), comprising 756 images of 108 people. For each person, 7 images were acquired in two separate sessions. In total, 285,390 pairs of comparisons were made for each algorithm: 2268 intra-class comparisons and 283,122 inter-class comparisons, with 9600 template bits and 9600 mask bits per template. The resolution of CASIA-v1 images is 320 × 280 pixels.
The experimental results are evaluated with parameters such as the False Acceptance Rate (FAR), False Rejection Rate (FRR), Equal Error Rate (EER), Receiver Operating Curve (ROC), and different downsampling factors and patch sizes (FLDA/PCA). The error rate can be minimized by selecting the threshold value at the intersection of FAR and FRR. The system obtains a recognition performance of about EER = 0.01.
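Selecting the threshold at the FAR/FRR intersection can be sketched with synthetic Hamming-distance distributions; the distribution parameters below are illustrative, not the paper's measured scores:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """At a given Hamming-distance threshold t: FRR = genuine pairs wrongly
    rejected (score > t), FAR = impostor pairs wrongly accepted (score <= t)."""
    frr = np.mean(genuine > threshold)
    far = np.mean(impostor <= threshold)
    return far, frr

def eer(genuine, impostor):
    """Sweep thresholds and return the point where FAR and FRR are closest,
    i.e. the intersection that minimizes the overall error rate."""
    ts = np.linspace(0, 0.5, 501)
    rates = np.array([far_frr(genuine, impostor, t) for t in ts])
    i = np.argmin(np.abs(rates[:, 0] - rates[:, 1]))
    return ts[i], rates[i]

rng = np.random.default_rng(0)
genuine = rng.normal(0.25, 0.04, 2268)    # synthetic intra-class scores
impostor = rng.normal(0.47, 0.02, 10000)  # synthetic inter-class scores
t, (far, frr) = eer(genuine, impostor)
print(0.25 < t < 0.47)  # True: the threshold lies between the two distributions
```

The better the two score distributions are separated, the lower the error rates at the chosen threshold.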
Figure 8 shows comparisons of Receiver Operating Curve (ROC) and recognition rate for the experiment.
4.2. Intraclass Comparisons
In our work, 2268 intraclass comparisons of iris templates were successfully performed and their hamming distance distribution is shown in Figure 9.
4.3. Interclass Comparisons
In our work, 283122 interclass comparisons of iris templates were executed successfully and the histogram of distribution is shown in Figure 10.
4.4. Intraclass and Interclass Comparisons
Among all the results of intraclass comparisons and interclass comparisons, very few values were seen to overlap. Their combined hamming distance distribution for pairs of comparisons for each algorithm is shown in Figure 11.
Table 1 and Figure 12 show the result of the original image with different downsampling factors and different patches.

Here, we use the 756 iris images of the set as high-resolution reference images. We then downsampled the iris images using bicubic interpolation by a factor of 2n (i.e., the images are resized to 1/(2n) of the original size) and use all the downsampled iris images. We follow this downsampling approach because no database with low-resolution images and corresponding high-resolution reference images is available. We then compare our results with bilinear and bicubic interpolation and with three different contrast enhancement algorithms (imadjust, clahe, and msr) applied to the iris images.
We measure the performance of the algorithm by calculating the PSNR (in dB), MSE, and SSIM values between the original images and the contrast-modified images. We also compare the results for different patch sizes corresponding to 1/2, 1/4, 1/6, and 1/8 of the low-resolution image size; the patch size is declared proportional to the size of the low-resolution images and is a critical parameter. After calculating the PSNR, MSE, and SSIM values, we find that the FLDA/PCA method performs better than bilinear or bicubic interpolation even at very low resolution [51]. We can therefore conclude that it is more resilient to reductions in image resolution. We also observe that bilinear and bicubic interpolation have similar performance for small downsampling factors, with a larger difference appearing at very low resolution. We observe that as the resolution decreases, more artifacts appear in the images.
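The PSNR and MSE measures used here can be computed as follows for 8-bit images (SSIM is omitted for brevity):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64))**2)

def psnr(a, b, peak=255.0):
    """PSNR in dB for 8-bit images: 10 * log10(peak^2 / MSE)."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10 * np.log10(peak**2 / m)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy(); b += 10                 # uniform error of 10 grey levels
print(mse(a, b))                      # 100.0
print(round(psnr(a, b), 2))           # 28.13
```

Higher PSNR (lower MSE) between a reconstructed image and its high-resolution reference indicates less degradation, which is how the downsampling comparison above is scored.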
Table 2 shows the performance of the proposed method in terms of recognition rate, False Acceptance Rate (FAR), and False Rejection Rate (FRR). The recognition rate of 99.99%, with FAR = 0.16 and FRR = 0.00, shows the success of the method.

5. Conclusions
In this paper, we propose a novel iris recognition method using a 1D Log-Gabor filter and a fusion of FLDA/PCA. Our algorithm was duly tested on the CASIA V1 database of grayscale eye images to verify its efficiency. This research offers a robust and fast iris recognition technique by implementing the FLDA/PCA method in a new, optimized, and time-saving manner. Our proposed algorithm is superior to bilinear and bicubic interpolation, with a recognition rate of 99.99%.
Data Availability
The dataset used and analyzed during the current study is available on “http://www.cbsr.ia.ac.cn” [36], “CASIA Iris Image Database Version 1.0”, upon reasonable request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors’ Contributions
Rachida Tobji contributed to conceptualization, validation, and writing the original draft; Naeem Ayoub contributed to formal analysis and investigation; Rachida Tobji and Naeem Ayoub contributed to methodology; Wu Di contributed to project administration and supervision.
Acknowledgments
The work presented in this paper was supported by the National Natural Science Foundation of China under Grant no. 61370201.
References
 A. K. Jain, "Biometrics," in Personal Identification in Networked Society, R. M. Bolle and S. Pankanti, Eds., pp. 1–41, Springer US, 2006, https://www.springer.com/gp/book/9780387285399.
 K. W. Bowyer, K. Hollingsworth, and P. J. Flynn, "Image understanding for iris biometrics: a survey," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 281–307, 2008.
 J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148–1161, 1993.
 S. Hsieh, Y. Li, and C. Tien, "Test of the practicality and feasibility of EDoF-empowered image sensors for long-range biometrics," Sensors, vol. 16, no. 12, p. 1994, 2016.
 D. L. Woodard, S. Pundlik, P. Miller et al., "On the fusion of periocular and iris biometrics in non-ideal imagery," in Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey, August 2010.
 J. Daugman, "New methods in iris recognition," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 5, pp. 1167–1175, 2007.
 A. Ross and S. Shah, "Segmenting non-ideal irises using geodesic active contours," in Proceedings of the 2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference, pp. 1–6, MD, USA, September 2006.
 "Biometrics history," National Science and Technology Council, Subcommittee on Biometrics, http://www.biometrics.gov/, 2006.
 L. Flom and A. Safir, "Iris recognition system," US Patent 4,641,349, February 1987.
 "Historical timeline," Iridian Technologies, http://www.iridiantech.com/about.php, 2003.
 J. Daugman, "Biometric personal identification system based on iris analysis," US Patent 5,291,560, March 1994.
 C. D. Uchida, E. R. Maguire, S. E. Solomon et al., "Evaluating the use of iris recognition technology in Plumsted Township, New Jersey, 2002-2003: Version 1," MI: Inter-University Consortium for Political and Social Research, Article ID 208127, 2013.
 http://www.cl.cam.ac.uk/~jgd1000/afghan.html.
 R. P. Wildes, J. C. Asmuth, G. L. Green et al., "A machine-vision system for iris recognition," Machine Vision and Applications, vol. 9, no. 1, pp. 1–8, 1996.
 S. L. Lim, K. L. Lee, O. B. Byeon et al., "Efficient iris recognition through improvement of feature vector and classifier," ETRI Journal, vol. 23, no. 2, pp. 61–70, 2001.
 K. Bae, S. Noh, J. Kim et al., "Iris feature extraction using independent component analysis," in Proceedings of the International Conference on Audio- and Video-Based Biometric Person Authentication, pp. 838–844, Springer, Berlin, Germany, June 2003.
 A. Sepehr, K. Faez, and A. Asghari, "A fast and accurate iris recognition method using the complex inversion map and 2D-PCA," in Proceedings of the Seventh IEEE/ACIS International Conference on Computer and Information Science (ICIS 2008), OR, USA, 2008.
 C. Wen-Shiung, C. A. Chuan, and S. W. Shih, "Iris recognition using 2D-LDA + 2D-PCA," in Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, April 2009.
 X. Lan, S. Zhang, P. C. Yuen, and R. Chellappa, "Learning common and feature-specific patterns: a novel multiple-sparse-representation-based tracker," IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 2022–2037, 2018.
 X. Lan, M. Y. Xiangyuan, R. Shao, Z. Bineng, C. Pong, and H. Zhou, "Learning modality-consistency feature templates: a robust RGB-infrared tracking system," IEEE Transactions on Industrial Electronics, 2019.
 N. Ayoub, Z. Gao, B. Chen, and M. Jian, "A synthetic fusion rule for salient region detection under the framework of DS-evidence theory," Symmetry, vol. 10, no. 6, p. 183, 2018.
 X. Lan, A. J. Ma, P. C. Yuen, and R. Chellappa, "Joint sparse representation and robust feature-level fusion for multi-cue visual tracking," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5826–5841, 2015.
 J. Daugman, "Probing the uniqueness and randomness of IrisCodes: results from 200 billion iris pair comparisons," Proceedings of the IEEE, vol. 94, no. 11, pp. 1927–1934, 2006.
 Y. Zhu, T. Tan, and Y. Wang, "Biometric personal identification based on iris patterns," in Proceedings of the 15th International Conference on Pattern Recognition, ICPR-2000, vol. 2, September 2000.
 R. P. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348–1363, 1997.
 N. Khiari, H. Mahersia, and K. Hamrouni, "Iris recognition using steerable pyramids," in Proceedings of the 2008 First Workshops on Image Processing Theory, Tools and Applications, Sousse, Tunisia, 2008.
 L. Masek, Recognition of Human Iris Patterns for Biometric Identification, University of Western Australia, Australia, 2002.
 L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filters," in Object Recognition Supported by User Interaction for Service Robots, 2002.
 D. De Martin-Roche, C. Sanchez-Avila, and R. Sanchez-Reillo, "Iris recognition for biometric identification using dyadic wavelet transform zero-crossing," in Proceedings of the IEEE 35th Annual 2001 International Carnahan Conference on Security Technology, pp. 272–277, London, UK, 2001.
 L. Ma, Y. Wang, and T. Tan, "Iris recognition based on multichannel Gabor filtering," in Proceedings of ACCV2002: The 5th Asian Conference on Computer Vision, pp. 23–25, January 2002.
 T. Christel-Loic, M. Lionel, and T. Lionel, "Person identification technique using human iris recognition," in Proceedings of the 15th International Conference on Vision Interface, pp. 294–299, 2002.
 H. Rai and A. Yadav, "Iris recognition using combined support vector machine and Hamming distance approach," Expert Systems with Applications, vol. 41, no. 2, pp. 588–593, 2014.
 N. F. Soliman, E. Mohamed, and F. Magdi, "Efficient iris localization and recognition," Optik - International Journal for Light and Electron Optics, pp. 469–475, 2017.
 A. B. Dehkordi and S. A. Abu-Bakar, "Iris code matching using adaptive Hamming distance," in Proceedings of the 2015 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), pp. 404–408, Kuala Lumpur, Malaysia, October 2015.
 Z. Yong and W. Yan, "A fusion iris feature extraction method based on Fisher linear discriminant," in Proceedings of the International Conference on Machine Learning and Cybernetics, 2013.
 CASIA iris image database version 1.0, http://www.cbsr.ia.ac.cn.
 Z. Gao, N. Ayoub, D. Chen, B. Chen, and Z. Lu, "SAMM: surroundedness and absorption Markov model based visual saliency detection in images," IEEE Access, vol. 6, pp. 71422–71434, 2018.
 L. Ma, T. Tan, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.
 B. K. P. Horn, Robot Vision, MIT Press, 1986.
 V. K. Madisetti and D. K. Williams, Eds., The Digital Signal Processing Handbook, 1997.
 P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
 N. Ayoub, Z. Gao, D. Chen et al., "Visual saliency detection based on color frequency features under Bayesian framework," KSII Transactions on Internet and Information Systems, vol. 12, no. 2, pp. 676–692, 2018.
 C. W. Tan and A. Kumar, "Unified framework for automated iris segmentation using distantly acquired face images," IEEE Transactions on Image Processing, vol. 21, no. 9, pp. 4068–4079, 2012.
 J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004.
 N. Othman, B. Dorizzi, and S. Garcia-Salicetti, "OSIRIS: an open source iris recognition software," Pattern Recognition Letters, vol. 82, pp. 124–131, 2016.
 W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185–1188, 1998.
 C. Sanchez-Avila, R. Sanchez-Reillo, and D. De Martin-Roche, "Iris-based biometric recognition using dyadic wavelet transform," IEEE Aerospace and Electronic Systems Magazine, vol. 17, no. 10, pp. 3–6, 2002.
 R. Szewczyk, K. Grabowski, M. Napieralska et al., "A reliable iris recognition algorithm based on reverse biorthogonal wavelet transform," Pattern Recognition Letters, vol. 33, no. 8, pp. 1019–1026, 2012.
 D. M. Monro, S. Rakshit, and D. Zhang, "DCT-based iris recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 586–595, 2007.
 Y. Peng, J. Li, and X. Ye, "Iris recognition algorithm using modified Log-Gabor filters," in Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06), Hong Kong, China, 2006.
 F. Alonso-Fernandez, R. A. Farrugia, and J. Bigun, "Very low-resolution iris recognition via eigen-patch super-resolution and matcher fusion," in Proceedings of the 8th IEEE International Conference on Biometrics Theory, Applications and Systems, BTAS 2016, USA, September 2016.
Copyright
Copyright © 2019 Rachida Tobji et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.