Research Article  Open Access
Hala N. Fathee, Osman N. Ucan, Jassim M. AbdulJabbar, Oguz Bayat, "Efficient Unconstrained Iris Recognition System Based on CCT-Like Mask Filter Bank", Mathematical Problems in Engineering, vol. 2019, Article ID 6575019, 10 pages, 2019. https://doi.org/10.1155/2019/6575019
Efficient Unconstrained Iris Recognition System Based on CCT-Like Mask Filter Bank
Abstract
In this paper, a new personal identification method based on unconstrained iris recognition is presented. We apply a nontraditional step for feature extraction, where a new circular contourlet filter bank is used to capture the iris characteristics. This idea is based on a new geometrical image transform called the circular contourlet transform (CCT). An efficient multilevel and multidirectional contourlet decomposition method is needed to form a reduced-length quantized feature vector with improved performance. The CCT provides both multiscale and multi-oriented analysis of iris features. Circular contourlet-like mask filters can be used with shapes just like the 2D circular-support regions in different scales and directions. A reduced recognition system is realized using a single branch of the whole decomposition bank, highlighting a system realization with lower complexity and fewer computations. In the proposed recognition system, only five out of seven elements of the gray-level co-occurrence matrix are required in the creation of the feature vector, which leads to a further reduction in computations. In addition, the highly discriminative frequency regions due to the use of circular-support decompositions can result in highly accurate feature vectors, reflecting good recognition rates for the proposed system. It is shown that the proposed system has encouraging performance in terms of high recognition rates and a reduced number of feature vector elements. This reflects reliable and rapid recognition properties. In addition, some promising characteristics of the system are apparent since it can efficiently be realized with lower computational complexity.
1. Introduction
Over the last two decades, several methods have been developed for iris recognition. Most of those methods were designed for frontal and high-quality iris images. Among them, the most widely known is Daugman’s approach, which is still used in most commercialized iris recognition systems [1]. In his method, Daugman transformed the segmented iris image into log-polar coordinates. He located the iris boundaries using a relatively time-consuming differential operator. Then, he calculated the convolution of complex Gabor filters with the iris image to extract the image features. After that, he evaluated the complex map of phasors and generated a 2048-bit iris code to match iris codes with Hamming distance criteria. Although the Gabor-filter-based methods show a high degree of accuracy, they nevertheless require a long computational time [2]. Wildes [3] then analyzed iris textures using a four-level Laplacian pyramid. He first used the gradient criterion and the circular Hough transform to locate the iris. Then, the Laplacian operator was applied to extract the iris image features at four levels of resolution.
After the work of Daugman, many other research papers were presented to deal with the various challenges of iris recognition systems. In 2004, Z. Sun et al. [4] applied the zero-crossing wavelet transform to segment what were called blocks of interest (BOIs) in the iris image after normalization. In this work, both statistical and structural classifiers were cascaded for better iris recognition. Unfortunately, the use of more than one classifier requires more time to accomplish the job. Sudha et al. [5, 6] proposed another method using edge maps to extract iris codes and used the Hausdorff distance to evaluate code matching. Although edge maps have some advantages in terms of low storage space, fast transmission, fast processing, and high hardware compatibility, high recognition rates cannot be achieved unless different values of parameters such as block size and partialness are initially studied and fixed.
In 2010, Du et al. [7] proposed a noncooperative iris recognition method in which the iris features are extracted using a Gabor descriptor. The method can work with off-angle and low-resolution iris images. The Gabor wavelet is incorporated with the scale-invariant feature transform (SIFT) for better extraction of features. Both the phase and magnitude of the Gabor wavelet outputs were used to describe feature points locally. Double feature region maps were designed to locally and globally register the feature points. Each subregion in that map was locally adjusted to dilation, contraction, and deformation. The method showed good performance for frontal and off-angle iris matching, albeit at the cost of complex computations.
In 2011, M. Abdullah et al. [8] proposed a method to integrate iris recognition with smart cards to develop a high-security access system. An ordinary Haar wavelet filter was used to extract the features, which were stored in a smart card to be compared against a stored database for authentication. In addition, another algorithm that focused on rapid and accurate iris identification was presented to deal with occluded eye images [9]. In its feature extraction step, only the fourth and fifth vertical and diagonal Haar wavelet coefficients were taken to express the characteristic patterns in the iris mapped image. Here also, the performance of the proposed system could be improved using other types of wavelet filters. In addition, the wavelet transform lacks the property of multi-orientation.
In 2012, an efficient iris recognition algorithm was developed by B. Jain et al. [10] using the Fast Fourier Transform for the calculation of all possible sets of moments. The recognition results were produced under favorable conditions, which reflects the dependency of the method on different illumination circumstances. Also in 2012, J. M. AbdulJabbar and Z. N. Abdulkader [11] introduced a nontraditional step for feature extraction by applying a new two-dimensional (2D) elliptical-support wavelet Haar filter bank to capture the iris characteristics. A five-level 2D elliptical-support wavelet decomposition was needed to form a reduced fixed-length quantized feature vector with improved performance. As a final step for iris matching, the Hamming distance was applied.
In 2013, K. N. Thanh [12] presented human identification at a distance using iris and face information. In that dissertation, three major challenges to human identification at a distance were addressed, namely, input image resolution, input data quality variation, and unavailability of a part of the biometric modalities. Super-resolution techniques were adopted to enhance the resolution, resulting in some improvements in the recognition performance of the biometric system. However, these techniques require the noise statistics and the prior probability of high-resolution features to be estimated beforehand. Also in 2013, M. S. Khalili and H. Sadjedi [13] presented a method that applies a mask to the iris image to remove the unexpected factors affecting the location of the iris. Then, a Canny edge detector was used to find the exact location of the iris. Distinctive features were extracted via a 2D discrete stationary wavelet transform with Symlet 4 filters. The features obtained from the application of the wavelet were investigated via the implementation of a similarity criterion for the feature selection procedure.
Ideal image acquisition conditions are assumed in most of the abovementioned iris recognition systems. These conditions may include a Near-Infrared (NIR) light source for retrieving clear iris texture and appearance. In addition, they may also include stare constraints and a close distance from the capturing device. However, when these constraints are relaxed, the recognition accuracy of most systems decreases. Recently, different processing methods for iris images captured in unconstrained environments have been proposed. In 2012, an effective method was proposed by M. Mahlouji and A. Noruzi [14] for the localization of the inner and outer iris boundaries, presenting an iris recognition system for unconstrained environments. In this method, the circular Hough transform was used for the segmentation, and the localization of the boundaries between the upper and lower eyelids occluding the iris was performed via application of the linear Hough transform. When compared with other popular iris segmentation methods, a relatively higher precision was obtained for this method with a shorter processing time. A high accuracy rate of 97.50% was achieved when testing the results on CASIA database images. However, the processing time can be further reduced by reducing the feature length and its computation steps.
In addition, in 2012, another method for use in iris recognition systems in an unconstrained environment was proposed by J. M. Colores et al. [15]. This method consists of two stages of iris quality evaluation for improvement of the recognition rate using the Daugman algorithm. Although the equal error rate (EER) value was reduced by about 12.2%, a high processing time for the classical Daugman recognition method was obtained.
Iris images captured in a constrained environment contain sufficient information to discriminate individuals. In a noisy environment, any iris recognition system of this type may still show a good recognition rate, but with degraded performance. In 2013, P. M. Patil [16] presented a study of iris recognition in a less-constrained environment with different challenges. Different iris recognition methods were reviewed, and a platform for the future development of less-constrained iris biometric systems was provided.
Noisy iris images can markedly degrade the recognition accuracy by increasing intra-individual variation. To overcome these problems, R. Gupta and A. Kumar [17] proposed in 2013 a segmentation technique to handle iris images captured under less-constrained conditions. The technique investigated different types of noise, such as iris obstructions and specular reflection, with some error percentage reductions. The k-means clustering algorithm and the circular Hough transform were used to localize the iris boundary. Then, the noisy regions were detected and isolated.
In 2014, N. Kaur and M. Juneja [18] proposed an approach for iris recognition in an unconstrained environment with a technique based on fuzzy C-means clustering applied as a preprocessing stage for iris segmentation. Classical edge map detection (the Canny edge detector and circular Hough transform) was used with some initially added enhancement stages for greater accuracy. Nevertheless, in this approach, the addition of the two stages of clustering and enhancement increased the complexity of computations.
In addition, in 2014, Y. Chen et al. [19] proposed an improved iris recognition system using three feature selection strategies (the orientation probability distribution function, the magnitude probability distribution function, and a compounded strategy combining the two methods for further selection of optimal subfeatures). A matching method based on weighted subregion matching fusion was applied, utilizing particle swarm optimization to accelerate the weight determination of different subregions, to match their scores, and to generate the final decision. Databases of the types CASIA-V3 Interval, Lamp, and MMU-V1 were tested, resulting in high recognition rates. However, the process of generating the final decision may cause additional computational complexity.
In 2014, several methods were proposed by M. Al-Rifaee [20] to process iris images captured in unconstrained environments. The accuracy of the iris recognition systems was improved. Nevertheless, these systems still have some problems in the segmentation and feature selection stages, resulting in high false rejection rates (FRR) and false acceptance rates (FAR), or even in recognition failure.
In 2016, Y. Fei et al. [21] proposed a performance improvement method for unconstrained iris recognition in different environments based on domain adaptation metric learning solved by kernel matrix calculations. The optimization of the learning constraints in the process of iris recognition was performed using the intra-class/inter-class Hamming distance. The distance between two iris samples was redefined after computing an optimal Mahalanobis matrix for a certain cross-environment system. The results indicated that this method increased the accuracy of unconstrained iris recognition in different circumstances, highlighting an improvement in the classification ability of iris recognition systems.
The remainder of the paper is organized as follows: A general description of the proposed method is given in Section 2. In Section 3, the applied iris segmentation step is explained. Section 4 contains the modified step of feature vector extraction and coding. Also in this section, the applied reduced CCT-like mask filter bank is described, and the results are shown. A binary coded feature vector of 72-bit length is also created from the outputs of five elements of the gray-level co-occurrence matrix (GLCM), instead of seven. Finally, Section 5 concludes this paper.
2. The Proposed Iris Recognition Method
The proposed iris recognition method is a modified version of the classical iris recognition method, which usually consists of four steps: segmentation, normalization, feature extraction and coding, and matching. A new identification method is achieved by applying a nontraditional step for feature extraction (where a new filter bank of CCT filters is used to capture the iris characteristics). It is known that 2D filters are widely used for various types of image processing and analysis. Although the classical 2D discrete wavelet transform (2D DWT) is known to be a powerful tool in many image processing applications such as compression, noise removal, image edge enhancement, and feature extraction, the 2D DWT is not optimal for capturing the 2D singularities found in images and often required in many segmentation and compression applications. In particular, natural images consist of edges that are smooth curves, which cannot be captured efficiently by the classical 2D wavelet transform. Thus, more effective methods for 2D singularity-capturing transforms have been developed. One of the most recent transformation methods is the 2D elliptical-support wavelet transform (2D ESWT), which was efficiently used as the main feature extractor in a recent method for iris recognition [11].
Another attempt at a geometrical image transform is the 2D circular-support wavelet transform (2D CSWT) [24], which can efficiently represent images using circular split 2D spectral schemes (circularly decomposed frequency subspaces). The designed 2D filters using circular-support schemes can function as a 2D wavelet filter bank. One of the main benefits of circular-support schemes is that they can achieve better performance than rectangular-support schemes when it is desired to extract as much low-frequency information as possible in a 2D low-pass filtering channel (or as much high-frequency information as possible in a 2D high-pass filtering channel) [25]. It was shown in [24] that the 2D circular-support decomposition scheme can effectively improve the operation of extracting both approximation and detail coefficients from original images using 2D circular filters, instead of using the traditional two-stage 1D filter decomposition in the classical 2D discrete wavelet transform. Therefore, it is believed that a new type of multiscale decomposition can be achieved in this paper using 2D circular-support frequency decomposition regions, which can serve as highly discriminative regions, while the iris features accurately represent all iris image contents at the right scale. The filters of the 2D circular-support wavelet transform can be simply realized by frequency-masking filters with shapes just like the 2D circular-support regions at different scales. This reflects simplicity in realizing the processing system in addition to its accuracy.
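The circular-support decomposition described above amounts to partitioning the 2D spectrum into concentric frequency regions. The following minimal sketch shows how such binary frequency masks can be built with NumPy; the radii fractions and the three-region split are illustrative assumptions, not the paper's actual filter design.

```python
import numpy as np

def circular_masks(shape, radii=(0.25, 0.5)):
    """Binary frequency masks with circular support: a low-pass disc,
    an annular band, and the high-pass remainder, centred on the
    fftshift-ed 2D spectrum. Radii fractions are illustrative only."""
    rows, cols = shape
    v = (np.arange(rows) - rows / 2) / (rows / 2)
    u = (np.arange(cols) - cols / 2) / (cols / 2)
    r = np.hypot(v[:, None], u[None, :])  # normalised radial distance
    low = r <= radii[0]
    band = (r > radii[0]) & (r <= radii[1])
    high = r > radii[1]
    return low, band, high

low, band, high = circular_masks((64, 64))
# The three regions partition the spectrum: every bin is in exactly one
assert (low.astype(int) + band + high == 1).all()
```

Directional (contourlet-like) masks would further split each annulus into angular wedges, but the principle of pixel-wise multiplication with the shifted spectrum is the same.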
Nevertheless, wavelets may not be the best choice for representing natural images (such as those of the iris). This is due to the fact that wavelets are blind to the smoothness along the edges commonly found in images and always lack texture orientation information. Hence, the use of the contourlet transform [26, 27] is preferred in this paper to identify iris features, as it offers both of the important properties of anisotropic scaling and directionality. The contourlet transform is often used in image processing (e.g., image denoising and compression) [28–31]. In the classical contourlet transform (CT), a two-stage filter bank structure is usually considered: a stage of subband decomposition followed by a directional transform. A Laplacian pyramid (LP) is employed for the first stage, while directional filter banks (DFBs) are used in the second stage of angular decomposition. A comparison between the wavelet scheme and the contourlet shows the improved edge contours due to both the multiscale and multi-orientation decompositions of the latter [27].
In our proposed iris recognition system, the CCT is used in the feature extraction stage. The modifications in the proposed system aim to obtain a reduced-length best-fit code feature vector with a high recognition rate. Reducing the length of the iris code can significantly reduce the required processing time, which is considered an important feature in dealing with online real-world applications. The steps of the proposed system are described in the next sections.
3. Iris Segmentation Step
In the iris recognition operation, the first step is to separate the area of the iris. Two rings can isolate the iris area: the first ring is for the pupil (inside the iris boundary) and the second one is for the sclera (outside the iris boundary) [32]. After that, the noise regions should be removed from the segmented region. The main steps start by localizing the candidate region of the iris or directly localizing it using a certain method. This can be done using k-means clustering, as described in [33]. Then, the unimportant parts (such as the pupil, the slice outside the iris, eyelids, eyelashes, and skin) should be omitted [34]. The iris area is usually enclosed by top and bottom limitations (i.e., eyelids and eyelashes). The detection of the inner boundary circle and the outer boundary circle can be simply accomplished using Canny edge detection [35] followed by a Hough transform method. The Hough transform is a standard computer vision algorithm that can be used to determine the simple geometric objects existing in an image, such as lines and circles [36].
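As a rough illustration of the circle detection step above, a fixed-radius circular Hough transform can be sketched in a few lines. This is a simplified toy version (a single known radius and a synthetic edge map standing in for a Canny output), not the segmentation code used in the paper, where the radius itself is also searched.

```python
import numpy as np

def hough_circle(edges, radius):
    """Accumulate votes for circle centres at one fixed radius.
    `edges` is a binary edge map (e.g. from a Canny detector).
    Returns the (row, col) centre with the most votes."""
    rows, cols = edges.shape
    acc = np.zeros((rows, cols), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Each edge pixel votes for all centres one radius away
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < rows) & (cx >= 0) & (cx < cols)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge map: a circle of radius 10 centred at (32, 32)
edges = np.zeros((64, 64), dtype=bool)
t = np.linspace(0, 2 * np.pi, 360)
edges[(32 + 10 * np.sin(t)).astype(int), (32 + 10 * np.cos(t)).astype(int)] = True
centre = hough_circle(edges, 10)  # should land on or near (32, 32)
```

In a full segmenter, the accumulator would be three-dimensional (row, column, radius) so that both the pupil and sclera boundary circles can be recovered.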
Determining the iris area is the first significant source of inaccuracy in iris segmentation. Iris detection errors are caused by high local contrast reflecting some non-iris regions. These non-iris regions (the sources of segmentation errors) may include the eyebrow, eyelashes, the frame of glasses, and white regions that are caused by luminance on the skin behind the eye region. Thus, to avoid such segmentation errors, unimportant non-iris areas must be perfectly excluded before starting the segmentation step (as in Figure 1). In order to avoid errors as well as to reduce the processing time of the segmentation step, the iris image must be segmented correctly into three areas: the skin, iris, and sclera areas [33].
In some other work [37], iris segmentation is instead called “localization”. The iris is localized, and the unimportant parts (e.g., eyelid, pupil, etc.) are removed from the original image. Localization of the iris determines an annular portion between the pupil (inner boundary) and the sclera (outer boundary). Both the inner and outer boundaries of a typical iris can be approximated by two circles. Note that the segmentation algorithm used here can also handle the occluded iris images that exist in the UBIRIS v1 database, but it sometimes fails when the occlusion is very large (covering more than 50% of the iris region). Therefore, iris images with large occlusions are excluded from the testing process, since such occlusion decreases the iris bit code size, which significantly reduces recognition performance.
4. Feature Vector Extraction and Coding Step
As mentioned before, the CCT is applied in the feature extraction stage of this iris recognition system. A simple frequency-masking filter bank can be used to realize the CCT, with circular contourlet-like shapes simulating the CCT’s two filtering stages of multiscale and multidirectional decompositions. This reflects the simplicity of realization of the processing system. In addition, the new type of multiscale decomposition can be achieved by the use of circular-support decomposition regions, which can serve as highly discriminative frequency regions, giving some accurate features of the iris image by the correct representation of all its contents at the right scale and the right orientation. This can result in high accuracy values for the proposed less complex iris recognition system.
In the previous section, iris segmentation with denoising and normalization stages was detailed. By performing denoising, the unimportant black pixels of the segmented image were deleted in order to reduce the number of processed pixels, thus increasing the processing speed. In this section, feature extraction and coding are discussed. A block diagram of the proposed system is shown in Figure 2. In the normalization step, the well-known rubber sheet model is applied to convert the isolated radial area into a rectangular area. The conversion to the frequency domain in Figure 2 is then accomplished by applying a 2D FFT to the normalized image.
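The rubber sheet normalization mentioned above can be sketched as a polar-to-rectangular resampling. This is a minimal illustration assuming concentric pupil and iris circles and nearest-neighbour sampling; the output resolution (64 x 256) is an illustrative choice, not a value from the paper.

```python
import numpy as np

def rubber_sheet(img, centre, pupil_r, iris_r, n_radial=64, n_angular=256):
    """Daugman rubber-sheet sketch: unwrap the annular iris region into
    a fixed-size rectangular sheet (radius x angle)."""
    cy, cx = centre
    thetas = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    radii = np.linspace(pupil_r, iris_r, n_radial)
    # Sample along rays running from the pupil boundary to the iris boundary
    ys = cy + radii[:, None] * np.sin(thetas[None, :])
    xs = cx + radii[:, None] * np.cos(thetas[None, :])
    ys = np.clip(np.rint(ys).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.rint(xs).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]

eye = np.random.default_rng(0).random((128, 128))   # stand-in eye image
sheet = rubber_sheet(eye, (64, 64), 20, 50)
assert sheet.shape == (64, 256)
```

A production implementation would interpolate between non-concentric pupil and iris boundaries and carry an occlusion mask alongside the sheet.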
As shown in Figure 2, eight filters with frequency masks (FM 1–FM 8) were multiplied by the 2D FFT of the image; thus, eight resulting components were obtained. For each component, seven features were calculated using the GLCM formulas (1)–(7) [38].
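The paper's exact formulas (1)–(7) are not reproduced here, but a few of the standard textbook GLCM statistics behind them can be sketched as follows; the quantization level and the single horizontal pixel offset are illustrative choices.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Standard GLCM statistics for the horizontal offset (0, 1).
    These are textbook definitions, sketched for illustration."""
    q = np.floor(img / img.max() * (levels - 1e-9)).astype(int)  # quantise
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # co-occurrence counts
    P /= P.sum()                                            # joint probabilities
    i, j = np.indices(P.shape)
    return {
        "energy": (P ** 2).sum(),
        "contrast": ((i - j) ** 2 * P).sum(),        # squared-difference moment
        "dissimilarity": (np.abs(i - j) * P).sum(),  # absolute-difference moment
        "autocorrelation": (i * j * P).sum(),
    }

feats = glcm_features(np.random.default_rng(0).random((64, 64)))
assert 0 < feats["energy"] <= 1
```

In practice, such statistics would be computed on the magnitude of each mask-filtered component rather than on a random array.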
The values given by formulas (1)–(7) are usually calculated for each of the eight outputs of all circular multiscale and multi-orientation mask filters. Instead, in this paper, the following algorithms represent both processes of iris image feature extraction and bit-coding of the features.
(A) Algorithm for Constructing Features from an Iris Image.
(1) Read the image after completing the segmentation step.
(2) Remove the boundary around the iris.
(3) Convert the iris from polar coordinates to rectangular coordinates.
(4) Perform the 2D Fast Fourier Transform (FFT).
(5) Perform the FFT shift to center all the frequencies on the 2D spectrum sheet.
(6) Convert the filter masks into binary.
(7) Multiply (pixel-based) the 2D spectrum sheet of the iris image by the filter mask.
(8) Apply the inverse FFT shift.
(9) Perform the inverse Fourier transform.
(10) Execute the feature extraction step.
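Steps (4)–(9) of algorithm (A) form a standard FFT filtering round trip, which can be sketched as follows. The all-pass mask used in the demonstration is a placeholder, not one of the paper's FM 1–FM 8 masks.

```python
import numpy as np

def mask_filter(sheet, mask):
    """FFT filtering round trip: 2D FFT, centre the spectrum with an
    FFT shift, multiply pixel-wise by a binary frequency mask, then
    apply the inverse shift and inverse FFT. `mask` is any boolean
    array with the same shape as the normalised iris sheet."""
    spec = np.fft.fftshift(np.fft.fft2(sheet))      # steps (4)-(5)
    filtered = spec * mask                          # steps (6)-(7)
    out = np.fft.ifft2(np.fft.ifftshift(filtered))  # steps (8)-(9)
    return np.abs(out)

sheet = np.random.default_rng(2).random((64, 256))  # stand-in normalised iris
all_pass = np.ones_like(sheet, dtype=bool)
# With an all-pass mask the round trip returns the input (up to rounding)
assert np.allclose(mask_filter(sheet, all_pass), sheet)
```

With a real mask (e.g. a circular-support wedge), the output is the spatial-domain component carrying only that scale and orientation band, ready for step (10).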
Figure 3 shows the flow chart for constructing features from iris images. Many tests were performed in order to choose a good combination (concatenation) of useful outputs of different filter masks among the group of eight filter masks shown in Figure 2. It was found that filter mask FM 4 alone displayed the best feature characteristics. Figure 4 shows the block diagram of a reduced-complexity system using the single filter mask FM 4, resulting in a single component with only five required elements, rather than all seven elements of the GLCM described by (1)–(7). These elements are energy, autocorrelation, dissimilarity, inertia, and contrast, as shown in Figure 4. The five outputs from these elements are coded and concatenated to form the final feature vector representing the input iris image characteristics.
(B) Algorithm for Binary Bit-Coding and Feature Vector Creation.
(1) Read all elements.
(2) Normalize all element values to match 10-bit codes.
(3) Remove the least significant bit (LSB) from all coded features to form reduced-length (9-bit) features.
(4) Concatenate the reduced-length feature bits resulting from using five out of the seven elements of the calculated GLCM. The elements are
(i) Energy (repeated twice)
(ii) Autocorrelation (repeated twice)
(iii) Dissimilarity
(iv) Inertia
(v) Contrast (repeated twice).
(5) Generate a final feature vector with code length (9 × 5) + (9 × 3) = 72 bits.
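Algorithm (B) can be sketched directly from its steps: quantise each feature to 10 bits, drop the LSB (leaving 9 bits), and concatenate eight 9-bit codes. The feature values in the demonstration are arbitrary placeholders assumed to be pre-normalised into [0, 1].

```python
def encode_features(feats):
    """Algorithm (B) sketch: 10-bit quantisation, LSB removal, and
    concatenation with energy, autocorrelation, and contrast repeated
    twice, giving (9 x 5) + (9 x 3) = 72 bits."""
    def code(v):
        word = min(int(v * 1023), 1023)   # 10-bit quantisation (0..1023)
        return format(word >> 1, "09b")   # remove the LSB -> 9 bits
    order = ["energy", "energy",
             "autocorrelation", "autocorrelation",
             "dissimilarity", "inertia",
             "contrast", "contrast"]
    return "".join(code(feats[k]) for k in order)

vec = encode_features({"energy": 0.5, "autocorrelation": 0.9,
                       "dissimilarity": 0.1, "inertia": 0.3, "contrast": 0.7})
assert len(vec) == 72 and set(vec) <= {"0", "1"}
```

Dropping the LSB discards the quantisation level most sensitive to noise, which is why the repeated elements still contribute stable bits.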
Figure 5 shows the flow chart for binary bit-coding and feature vector creation with reduced length.
Over many experiments, the code length giving greater accuracy while keeping the vector as short as possible was determined. To accomplish this, tests were performed on the iris images of the UBIRIS V1 dataset [33]. Some sample results before the standard normalization of values are shown in Table 1, while Table 2 shows the same results after the standard normalization of all values. Table 3 shows the decimal values corresponding to a 10-bit implementation (from 0 to 1023) of the normalized values from Table 2.



From Table 3, it appears better to repeat the bits of energy, autocorrelation, and contrast twice, resulting in a final 72-bit feature vector. Table 4 shows the 72-bit codes resulting from concatenating the individual codes representing the decimal values of energy (repeated twice), autocorrelation (repeated twice), dissimilarity, inertia, and contrast (repeated twice) for each sample of the UBIRIS V1 images in Table 3.

The Hamming distance (HD) matching algorithm was used for iris feature vector matching. This matching algorithm is shown by the flow chart in Figure 6.
In Figure 6, the inter- and intra-class Hamming distances (HDs) were computed between the feature vectors of the input irises and the feature vectors of the database irises for 2000 iris images. Figure 7 shows both the inter-class and intra-class distributions, with a separating threshold of 0.46. For the recognition phase, the following rules were applied:
(a) Inter-class distribution
(b) Intra-class distribution
If HD ≤ 0.46, then the iris is recognized and the image is mapped to the database.
If HD > 0.46, then the iris is not recognized.
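The matching rules above reduce to a nearest-neighbour search under the normalised Hamming distance. A minimal sketch, with toy 72-bit codes and hypothetical subject names standing in for a real enrolment database:

```python
def hamming_distance(a, b):
    """Normalised Hamming distance between equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def recognise(probe, database, threshold=0.46):
    """Return the best-matching identity, or None when even the closest
    stored code exceeds the threshold (0.46, as read from Figure 7)."""
    best = min(database, key=lambda k: hamming_distance(probe, database[k]))
    return best if hamming_distance(probe, database[best]) <= threshold else None

db = {"subject1": "1010" * 18, "subject2": "0110" * 18}  # toy 72-bit codes
assert recognise("1010" * 18, db) == "subject1"   # HD = 0 -> recognised
assert recognise("0101" * 18, db) is None         # closest HD = 0.5 > 0.46
```

With fixed-length 72-bit codes, the distance can also be computed as a popcount of the XOR of two integers, which is what makes the reduced code length pay off at matching time.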
A comparative study between different methods of iris recognition is summarized in Table 5. This comparison is based on the kind of utilized dataset, the applied method for feature extraction, the resulting feature vector length (in bits), the type of matching algorithm, and the achieved percentage rate of accuracy.
 
SVM stands for Support Vector Machines. 
From Table 5, it can be seen that the recognition rate of the proposed method is comparable to those of the other illustrated methods, whereas the system is superior in terms of feature vector length (72 bits), reflecting a rapid recognition process. In addition, a single subband out of eight is needed in constructing the CCT, while five out of seven elements of the GLCM are sufficient for feature vector creation. Both reductions imply a less complex recognition system with fewer computations. Moreover, the tested UBIRIS V1 dataset contains iris images captured in an unconstrained environment, which means that the proposed system is an efficient unconstrained iris recognition system.
5. Conclusions
An efficient unconstrained iris recognition system has been proposed in this paper using the circular contourlet transform (CCT) to extract 2D anisotropic oriented features from degraded iris templates, rather than using classical methods based on textural-analysis wavelet transforms or even the classical contourlet transform (CT). Multiscale characteristics are usually noticed in iris images due to the improper alignment of eyes in front of cameras, while multiphase characteristics are noticed from the differently oriented curves in iris images. CCT subband decomposition has been proposed because of the multiscale and multidirectional properties of the iris feature vectors. CCT decomposition has been applied to UBIRIS V1 iris images (the iris image is treated as a pattern that is rich in multiphase and multiscale characteristics). During matching, the resulting feature vectors have been shown to give a comparable recognition rate with rapid recognition, since only a reduced feature vector length of 72 bits is required. In the phase of constructing the CCT filter bank, a single subband out of eight has been considered enough to cover most feature characteristics of iris images, while only five out of seven elements of the GLCM have been required for feature vector creation. Both reductions result in a less complex recognition system with fewer computations. This reflects a reduced-complexity implementation of the whole recognition system. Also, the required CCT subband filter has been easily realized by masks, again highlighting a great reduction in the realization requirements of the whole recognition system. Finally, in the process of simplifying the whole system, it has been noticed that, due to the multiscale characteristics of the CCT, no highly accurate segmentation process is needed.
Data Availability
The UBIRIS v1 database used to support the findings of this study has been deposited in the UBIRIS repository (http://iris.di.ubi.pt/index_arquivos/Page374.html).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
 J. Daugman, “Biometric Personal Identification System Based on Iris Analysis,” U.S. Patent (5), 291 560, 1994. View at: Google Scholar
 M. S. Khalili and H. Sadjedi, “A robust iris recognition method on adverse conditions,” International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), vol. 3, no. 5, pp. 33–48, 2013. View at: Publisher Site  Google Scholar
 R. P. Wildes, “Iris recognition: an emerging biometrie technology,” Proceedings of the IEEE, vol. 85, no. 9, pp. 1348–1363, 1997. View at: Publisher Site  Google Scholar
 Z. Sun, Y. Wang, T. Tan, and J. Cui, “Cascading statistical and structural classifiers for IRIS recognition,” in Proceedings of ICIP, pp. 1261–1264, 2004. View at: Google Scholar
 N. Sudha, N. B. Puhan, H. Xia, and X. Jiang, “Iris recognition on edge maps,” in Proceedings of the 2007 6th International Conference on Information, Communications and Signal Processing, Singapore, 2007. View at: Google Scholar
 N. Sudha, N. B. Puhan, H. Xia, and X. Jiang, “Iris recognition on edge maps,” IET Computer Vision, vol. 3, no. 1, pp. 1–7, 2009. View at: Publisher Site  Google Scholar
 Y. Du, C. Belcher, and Z. Zhou, “Scale invariant gabor descriptorbased noncooperative iris recognition,” EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 936512, 13 pages, 2010. View at: Publisher Site  Google Scholar
 M. A. Abdullah, F. H. AlDulaimi, W. AlNuaimy, and A. AlAtaby, “Smart card with iris recognition for high security access environment,” in Proceeding of The First Middle East Conference on Biomedical Engineering (MECBME’11), Sharjah, UAE, 2011. View at: Publisher Site  Google Scholar
 V. Roselin.E.Chirchi, L. M. Waghmare, and E. R. Chirchi, “Iris biometric recognition for person identification in security systems,” International Journal of Computer Applications, vol. 24, no. 9, pp. 1–6, 2011. View at: Publisher Site  Google Scholar
 B. Jain, M. K. Gupta, and J. Y. Bharti, “Efficient iris recognition algorithm using method of moments,” International Journal of Artificial Intelligence & Applications (IJAIA), vol. 3, no. 5, pp. 93–105, 2012. View at: Google Scholar
 J. M. AbdulJabbar and Z. N. Abdulkader, “Iris recognition using 2D ellipticalsupport wavelet filter bank,” in Proceedings of the 3rd International Conference on Image Processing Theory, Tools & Applications (IPTA 2012, 1518 October, 2012), pp. 359–363, Istanbul Aydin University, IEEE Xplore Digital Library, pp., Istanbul, Turkey, 2012. View at: Google Scholar
 N. Thanh, Human Identification at a Distance Using Iris and Face [Ph.D. thesis], Queensland University of Technology, Image and Video Research Laboratory, Science and Engineering Faculty, 2013.
 M. S. Khalili and H. Sadjedi, “A robust iris recognition method on adverse conditions,” International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), vol. 3, no. 5, pp. 33–49, 2013.
 M. Mahlouji and A. Noruzi, “Human iris segmentation for iris recognition in unconstrained environments,” International Journal of Computer Science Issues (IJCSI), vol. 1, pp. 149–155, 2012.
 J. M. Colores-Vargas, M. García-Vázquez, A. Ramírez-Acosta, H. Pérez-Meana, and M. Nakano-Miyatake, “Video images fusion to improve iris recognition accuracy in unconstrained environments,” in Proceedings of the Mexican Conference on Pattern Recognition, pp. 114–125, Springer, Berlin, Germany, June 2013.
 P. M. Patil, “Iris recognition in less constrained environment,” International Journal of Emerging Technology and Advanced Engineering, vol. 3, no. 7, pp. 196–200, 2013.
 R. Gupta and A. Kumar, “An effective segmentation technique for noisy iris images,” International Journal of Application or Innovation in Engineering & Management (IJAIEM), vol. 2, no. 12, pp. 118–125, 2013.
 N. Kaur and M. Juneja, “A novel approach for iris recognition in unconstrained environment,” Journal of Emerging Technologies in Web Intelligence, vol. 6, no. 2, pp. 243–246, 2014.
 Y. Chen, Y. Liu, X. Zhu, F. He, H. Wang, and N. Deng, “Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion,” The Scientific World Journal, vol. 2014, Article ID 157173, 19 pages, 2014.
 M. Al-Rifaee, Unconstrained Iris Recognition [Ph.D. thesis], School of Engineering and Sustainable Development, De Montfort University, UK, 2014.
 Y. Fei, Z. Changjiu, and T. Yantao, “Improving unconstrained iris recognition performance via domain adaptation metric learning method,” International Journal of Security and Its Applications, vol. 10, no. 5, pp. 27–40, 2016.
 M. Dobeš, J. Martinek, D. Skoupil, Z. Dobešová, and J. Pospíšil, “Human eye localization using the modified Hough transform,” Optik – International Journal for Light and Electron Optics, vol. 117, no. 10, pp. 468–473, 2006.
 S. Khalighi, P. Tirdad, F. Pak, and U. Nunes, “Shift and rotation invariant iris feature extraction based on nonsubsampled contourlet transform and GLCM,” in Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods (ICPRAM 2012), 2012.
 A. A. Dawood, Z. Talal Abede, and J. M. Abdul-Jabbar, “A multiplierless implementation of two-dimensional circular-support wavelet transform on FPGA,” Iraqi Journal for Electrical and Electronic Engineering, vol. 9, no. 1, pp. 16–28, 2013.
 J. M. Abdul-Jabbar and H. N. Fathee, “Design and realization of circular contourlet transform,” Al-Rafidain Engineering Journal, vol. 18, no. 4, pp. 28–42, 2010.
 M. Do and M. Vetterli, “Contourlets: a directional multiresolution image representation,” in Proceedings of the 2002 International Conference on Image Processing (ICIP 2002), pp. 357–360, Rochester, NY, USA, 2002.
 M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
 S. Satheesh and K. Prasad, “Medical image denoising using adaptive threshold based on contourlet transform,” Advanced Computing: An International Journal (ACIJ), vol. 2, no. 2, pp. 52–58, 2011.
 G. Liu, J. Liu, Q. Wang, and W. He, “The translation invariant wavelet-based contourlet transform for image denoising,” Journal of Multimedia, vol. 7, no. 3, pp. 254–261, 2012.
 K. Sakthivel, “Contourlet based image denoising using new threshold function,” in Proceedings of the International Conference on Global Innovations in Computing Technology (ICGICT’14), pp. 67, Tamilnadu, India, 2014.
 J. M. Abdul-Jabbar, A. I. Kanaan, and Z. N. Abdulkader, “Contourlet-based method for speckle reduction with adaptive estimation of noise level,” Al-Rafidain Engineering Journal, vol. 22, no. 5, 2014.
 C. Houston, Iris Segmentation and Recognition Using Circular Hough Transform and Wavelet Features, Rochester Institute of Technology, 2010, https://www.cis.rit.edu/.../(2010)%20Caroline%20Houston%20%20I.
 S. A. Sahmoud and I. S. Abuhaiba, “Efficient iris segmentation method in unconstrained environments,” Pattern Recognition, vol. 46, no. 12, pp. 3174–3185, 2013.
 H. Proenca and L. A. Alexandre, “UBIRIS: a noisy iris image database,” in Proceedings of the 13th International Conference on Image Analysis and Processing (ICIAP 2005), pp. 970–977, Cagliari, Italy, 2005.
 I. J. Shaikh and A. H. A. R. Shaikh, “Iris localization using segmentation & Hough transform method,” International Journal of Scientific & Engineering Research, vol. 5, no. 2, 2014.
 S. Singh and S. Singh, “Iris segmentation along with noise detection using Hough transform,” International Journal of Engineering and Technical Research (IJETR), vol. 3, no. 5, 2015.
 L. Ma, T. Tan, Y. Wang, and D. Zhang, “Efficient iris recognition by characterizing key local variations,” IEEE Transactions on Image Processing, vol. 13, no. 6, 2004.
 A. Azizi and H. R. Pourreza, “Efficient iris recognition through improvement of feature extraction and subset selection,” International Journal of Computer Science and Information Security (IJCSIS), vol. 2, no. 1, 2009.
Copyright
Copyright © 2019 Hala N. Fathee et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.