Mathematical Problems in Engineering / 2020 / Research Article | Open Access

Belhassen Akrout, Sana Fakhfakh, "Three-Dimensional Head-Pose Estimation for Smart Iris Recognition from a Calibrated Camera", Mathematical Problems in Engineering, vol. 2020, Article ID 9830672, 14 pages, 2020. https://doi.org/10.1155/2020/9830672

Three-Dimensional Head-Pose Estimation for Smart Iris Recognition from a Calibrated Camera

Academic Editor: Saeed Eftekhar Azam
Received: 18 May 2020
Accepted: 26 Jun 2020
Published: 18 Jul 2020

Abstract

Current research in biometrics aims to develop high-performance tools that better extract the traits specific to each individual and grasp their discriminating characteristics. This research is based on high-level analyses of images captured from the candidate to be identified, for a better understanding and interpretation of these signals. Several biometric identification systems exist; recognition systems based on the iris have many advantages and are among the most reliable. In this paper, we propose a new approach to biometric iris authentication. A new scheme is introduced that computes a three-dimensional head pose in order to capture a good iris image from a video sequence, since image quality strongly affects the identification results. From this image, we locate the iris and analyse its texture by an intelligent use of Meyer wavelets. Our approach was evaluated and validated on two databases, CASIA Iris Distance and MiraclHB, and a comparative study showed its effectiveness against methods in the literature.

1. Introduction

Computer-network security and access control to physical sites are becoming increasingly important to businesses and nations. This implies the need to verify the identity of individuals, and several methods are used to determine identities. A solution to this problem consists of taking into account physiological or behavioural elements that are unique and specific to each individual to ascertain their identity. These are identification techniques based on biometrics and iris recognition in particular, which remains the most reliable in the biometrics field.

A system of iris recognition includes five steps: (i) iris acquisition; (ii) iris localisation; (iii) iris normalisation; (iv) feature extraction and encoding; and (v) matching for an acceptance or rejection decision.

Iris localisation is a fundamental step for the good progress of the processing that follows, namely, iris normalisation, information extraction, and signature comparison. Reliable and rapid determination of the external and internal iris contours is key to successfully completing the identification steps. This justifies the growing interest of several studies, among which we can distinguish three techniques. These techniques are principally based on edge-detection methods such as integrodifferential operators [1–4], multiscale edge detection [4–7], and circular or elliptical Hough transformation [7, 8]. A reliable iris recognition system requires a uniform model able to describe any normalised iris in the database. This model should compensate for all kinds of geometric and colourimetric deformations that an iris can present. These distortions are often due to different acquisition conditions, the subject's attitude, and the behaviour of the iris towards its environment. Némesin and Derrode [9] used a video-based iris recognition approach that made it possible to extract a pattern by combining the images of the iris detected in real time. The authors used the average of a quality score inspired by the concentration score of Kang and Park's metric [10]. In order to optimise this approach, they carried out a step of tracking the pupil in real time with the Kalman filter [11]. Perez et al. [12] could detect the iris from a video on faces rotated about the coronal axis within a range of 40°. This approach was based on anthropometric models to locate the eyes and faces of a moving head. In order to detect the iris, the authors relied on its circular shape and its contrast with the sclera region. Rizon et al. [13] detected faces and eyes from a video using the Viola–Jones method [14]. The images received luminance normalisation in order to refine the face-localisation stage in real time. The authors used a circular Hough transform to detect the irises of each subject.

The key problems in an iris pattern are feature extraction and efficient coding by texture analysis [15–18]. The aim of this step is to extract a descriptive and discriminating biometric signature. Research on this axis is nowadays very extensive. Daugman [1, 2] was the first to publish research based on iris texture analysis. He used Gabor wavelet demodulation to construct an iridian signature [1–4]. The Gabor wavelet has been exploited by several researchers [17–19]. Boles [20, 21] introduced a new technique based on the one-dimensional wavelet transform. The proposed approach defined a set of m virtual circles around the pupil centre to extract m signals characteristic of the iris relief. Each data vector was then subjected to a wavelet transform with zero crossings and was considered as a signature. The use of a multiresolution decomposition called the Laplacian pyramid, which represents the different spatial characteristics of the iris, was proposed by Wildes [7, 8]. His approach consisted of aligning the iris to be identified with each iris in the database before pairing them by calculating similarity indices. He applied Laplacian and Gaussian filters to each of the two aligned images according to a pyramidal procedure. This method was used by many authors [22–24]. Rossant et al. [25, 26] proposed an approach that consisted of analysing the texture of the iris by orthogonal- or biorthogonal-wavelet decomposition with three levels. Multiscale analysis of the iris texture using the Mallat wavelet [5] with a Gabor filter bank was exploited by Nabti and Bouridane [6] for iridian-signature extraction.

Analysis of existing work in the literature clearly shows that the geometric normalisation of an extracted iris is essential. This problem has been resolved in several ways. In [4], the author solved the problem of lateral iris displacements and elastic deformations by linear unfolding. The works of [27–29] brought localised eyes back to a single reference angle in order to solve the problem of iris rotation. Perez et al. [12] successfully detected the iris along its coronal axis in order to remedy errors of iris location in different positions. Other works mainly remedied problems of poor light distribution and focused on contrast correction [13, 30]. We concentrated on an approach that guides the user to the correct head position during the acquisition phase in order to refine the signature-extraction stage.

In this context, this paper is organised as follows: in Section 2, we describe our new authentication scheme and detail our proposed approaches. Section 3 defines the estimation of similarities between the extracted codes. Experiment results using data and benchmark databases of iris are presented and discussed in Section 4. Finally, Section 5 concludes the paper and presents some perspectives.

2. Proposed Method

Our new approach consists of identifying people by iris recognition. Two major contributions are made in our system. The first is to locate the iris images by a noninvasive method, based on capturing an eye image after rectifying the initial position of the user's head. This correction is computed by a new method that calculates the head rotation angles from three interest points, which allows us to alert the user to rectify their position in front of the camera. Our second contribution is the clever use of Meyer wavelets, which allows an iris signature to be extracted and improves the authentication results. These two contributions are detailed in the rest of the paper with the proposition of a new iris-authentication scheme, as shown in Figure 1.

2.1. Three-Dimensional Head-Pose Estimation

The detection of a 3D head pose when capturing an iris image is a crucial authentication step in our approach and represents our first contribution. We calculate the three head-rotation angles (yaw, pitch, and roll) so as to capture the image of the iris in a standard position for all users, which makes the system insensitive to rotation or scale changes. This step has three stages. First, we detect the boxes including the nose and the eyes by the Haar-like feature method [14]. Second, we locate the interest points used to calculate the rotation angles: these interest points are the centres of the detected boxes, namely, the nose tip and the two eye centres (Figure 2). Third, we compute the three rotation angles from these points.

The Viola–Jones technique [14] is a widely used classifier based on Haar descriptors. It has the advantage of executing very quickly, and its effectiveness is well established [14]. Because the method relies on grey-level difference descriptors, it does not depend on illumination variations across the frames of a video sequence or on pixel colour in colour images, which explains its robustness to noise that can affect object-detection results. The use of integral images also makes it possible to detect the eyes and the nose while taking scale changes into account. The choice of interest points such as the eye centres and the nose tip is justified by the MPEG-4 standard [31]. For the estimation of the head-rotation angles about the three axes of yaw, pitch, and roll, three characteristic points of the MPEG-4 standard attracted our attention. These three points were chosen because, apart from respecting the bony structure of the face, the distances between them are invariant to changes in viewing direction and especially to facial expressions. Figure 2 presents our perspective model for calculating the head rotation, the subject sitting in front of a calibrated camera whose focal distance had been computed.

Let $E_l = (x_l, y_l)$ and $E_r = (x_r, y_r)$ be the eye centres, with $M$ being the centre of $[E_l E_r]$. $D_1$ is the distance between points $E_l$ and $E_r$; $D_2$ is the distance between point $E_l$ and the projection of point $E_r$ on the horizontal axis that goes through point $E_l$, as shown in Figure 3. $D_1$ is calculated by

$$D_1 = \sqrt{(x_r - x_l)^2 + (y_r - y_l)^2}.$$

Distance $D_2$ was determined by

$$D_2 = |x_r - x_l|.$$

We could then calculate the roll angle $\theta_Z$ (rotation about the $Z$ axis) as

$$\theta_Z = \arccos\!\left(\frac{D_2}{D_1}\right),$$

signed by the vertical offset $y_r - y_l$ of the two eye centres.
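As an illustrative sketch (not the authors' code; the coordinate layout and the sign convention are assumptions), the roll angle follows directly from the two eye centres:

```python
import math

def roll_angle(eye_left, eye_right):
    """Roll (rotation about the Z axis) from the two eye centres.

    D1 is the distance between the eye centres; D2 is the distance from
    the left eye to the projection of the right eye on the horizontal
    axis through the left eye. Roll = arccos(D2 / D1), signed by the
    vertical offset of the eyes (sign convention assumed).
    """
    xl, yl = eye_left
    xr, yr = eye_right
    d1 = math.hypot(xr - xl, yr - yl)   # D1: Euclidean eye distance
    d2 = abs(xr - xl)                   # D2: horizontal component
    angle = math.acos(d2 / d1)
    return math.copysign(angle, yr - yl)
```

For eyes offset equally in x and y (e.g., centres at (100, 120) and (160, 60)), the magnitude of the roll is 45°; for eyes on the same horizontal line, it is zero.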

After calculating the angle $\theta_Z$, we determined the yaw angle $\theta_Y$ (rotation about the $Y$ axis) with a new method based on perspective projection. Suppose $P$ is the point halfway between the eye centres in 3D space (Figure 4), lying on the optical axis at depth $Z$, and let $d$ be the half distance between the eyes. $f$ represents the focal length of the calibrated RGB camera. Let $p_1$ and $p_2$ be the projection points of the eye centres on the image plane, with abscissae $x_1$ and $x_2$ measured from the principal point.

By similar triangles of Figure 4, when the head is rotated by $\theta_Y$, the two eye centres lie at $(-d\cos\theta_Y,\; Z + d\sin\theta_Y)$ and $(d\cos\theta_Y,\; Z - d\sin\theta_Y)$ in the $(X, Z)$ plane. Establishing an image-plane coordinate system aligned with $X$ and $Y$, the perspective-projection principle gives

$$x_1 = \frac{-f\,d\cos\theta_Y}{Z + d\sin\theta_Y}, \qquad x_2 = \frac{f\,d\cos\theta_Y}{Z - d\sin\theta_Y}.$$

Since $Z$ was not known, it was eliminated between these two equations. Their sum and product are

$$x_1 + x_2 = \frac{2 f d^2 \sin\theta_Y \cos\theta_Y}{Z^2 - d^2\sin^2\theta_Y}, \qquad x_1 x_2 = \frac{-f^2 d^2 \cos^2\theta_Y}{Z^2 - d^2\sin^2\theta_Y}.$$

When $d^2\cos\theta_Y$ was factored in the ratio of these two expressions, we had

$$\frac{x_1 + x_2}{x_1 x_2} = -\frac{2}{f}\tan\theta_Y.$$

Distance $d$, the half distance of the eyes, is different from zero. In addition, the denominator $x_1 x_2$ must also be different from zero. We could then deduce that

$$\tan\theta_Y = \frac{-f (x_1 + x_2)}{2\, x_1 x_2}.$$

We could deduce the value of the yaw angle $\theta_Y$ (rotation about the $Y$ axis) according to

$$\theta_Y = \arctan\!\left(\frac{-f (x_1 + x_2)}{2\, x_1 x_2}\right).$$
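One way to sketch this yaw estimate in code (a hypothetical implementation assuming the closed form tan θ_Y = −f(x₁ + x₂)/(2 x₁ x₂), obtained when the eye midpoint lies on the optical axis; the symbol names are ours):

```python
import math

def yaw_angle(x1, x2, f):
    """Yaw (rotation about the vertical Y axis) from the projected eye
    abscissae x1 and x2 (image coordinates centred on the principal
    point, in pixels) and the focal length f (in pixels), assuming the
    midpoint of the eyes lies on the optical axis:

        tan(yaw) = -f * (x1 + x2) / (2 * x1 * x2)
    """
    return math.atan(-f * (x1 + x2) / (2.0 * x1 * x2))
```

For a frontal pose, the projections are symmetric (x₂ = −x₁), the numerator vanishes, and the yaw is zero; conversely, projecting a synthetically rotated eye pair through a pinhole model and feeding the result back recovers the rotation.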

The pitch angle $\theta_X$ (rotation about the $X$ axis) essentially depended on the nose tip (point $N$ in Figure 5) and the eye centres in space. Suppose $N'$ is the symmetric of point $N$ with respect to the line joining the eye centres, such that this line is perpendicular to $[N N']$ and passes through its middle; let $h$ be the half distance between $N$ and $N'$. Points $n$ and $n'$ are the projections of $N$ and $N'$, respectively, in the image plane, with ordinates $y_1$ and $y_2$. Establishing an image-plane coordinate system aligned with $X$ and $Y$, we obtain by similar triangles of Figure 5 and the perspective-projection principle, exactly as for the yaw angle,

$$y_1 = \frac{-f\,h\cos\theta_X}{Z + h\sin\theta_X}, \qquad y_2 = \frac{f\,h\cos\theta_X}{Z - h\sin\theta_X}.$$

The sum and the product of these two expressions eliminate the unknown depth $Z$. Since $h \neq 0$ and $y_1 y_2 \neq 0$, after development, the angle $\theta_X$ (pitch, rotation about the $X$ axis) was calculated as follows:

$$\theta_X = \arctan\!\left(\frac{-f (y_1 + y_2)}{2\, y_1 y_2}\right).$$

In cases where the rotations exceeded the tolerated thresholds for the three rotation axes X, Y, and Z, we alerted the user to bring their head back to the best position. Some results of head-pose estimation for a subject in front of a calibrated camera, with a focal distance of 620 pixels, are shown in Figure 6.

2.2. Iris Localisation

Du et al. [32] studied the precision of iris recognition. They concluded from their experiments that a more distinct and individually unique signature is found in the inner rings of the iris: the farther one moves towards its outer boundary, the less useful the extracted information becomes for determining a person's identity. In order to guarantee the uniqueness of the signature, we focused on localising the image area from which characteristic properties (the iris texture) are extracted and expressed in parametric form. The useful iris information lies in a crown between the pupil and the sclera.

Locating the iris consists of extracting this zone by determining its internal and external edges. This step is preceded by the eye localisation performed in Section 2.1 with the Viola–Jones method [14]; the inner contour of the iris is the pupil boundary. This region of the human eye has the distinction of being the darkest object in the image, so its location is indicated by the first peak in the grayscale histogram, as shown in Figure 7(d). In order to extract the pupil from its environment, a histogram-threshold technique was applied [27]: from the histogram, the threshold index is found in a simple and fast way. A mathematical morphological opening operation then proved necessary to eliminate the noise (e.g., eyelashes) from the resulting binary image, as shown in Figure 7.
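A NumPy-only sketch of this pupil segmentation (the threshold margin and the 3×3 structuring element are assumed parameters, not the authors'; `np.roll` wraps at borders, which is harmless for an interior pupil):

```python
import numpy as np

def erode(mask):
    """Erosion with a 3x3 square: keep a pixel only if all 8 neighbours are set."""
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def dilate(mask):
    """Dilation with a 3x3 square: set a pixel if any 8-neighbour is set."""
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def pupil_mask(gray, margin=10):
    """Binary pupil mask from a grayscale eye image.

    The pupil is assumed to be the darkest object, so the threshold is
    placed just above the darkest dominant histogram peak; `margin`
    (in grey levels) is an assumed tolerance.  The mask is cleaned by a
    morphological opening (erosion then dilation).
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    dark_peak = int(np.argmax(hist[:128]))  # dominant dark grey level
    mask = gray <= dark_peak + margin
    return dilate(erode(mask))
```

On a synthetic image, the dark pupil block survives the opening while an isolated dark noise pixel is removed.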

The location of the iris in the image requires the detection of its external contour, which can be obtained by detecting elliptical and parabolic shapes. These shapes are obtained using the elliptical or parabolic Hough transform [33] based on parametric models. The elliptical shape of the iris calls for the elliptical Hough transform, which approximates the iris edges by elliptical contours, while the upper and lower eyelids, considered as parabolic curves, are detected with the parabolic Hough transform. An example of a detection result showing the outer iris edges, pupil, and eyelids is illustrated in Figure 8.
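A minimal circular Hough transform sketch (NumPy only; the system described here uses elliptical and parabolic variants with full parametric models, so this fixed-radius-range circular version is only illustrative of the voting principle):

```python
import numpy as np

def hough_circle(edge_points, shape, radii):
    """Return (cy, cx, r) maximising a circular Hough accumulator.

    edge_points: iterable of (y, x) edge pixels; shape: image shape;
    radii: candidate radii.  Each edge point votes for every candidate
    centre lying at distance r from it.
    """
    h, w = shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for y, x in edge_points:
        for ri, r in enumerate(radii):
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            keep = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            # np.add.at accumulates repeated votes on the same cell
            np.add.at(acc[ri], (cy[keep], cx[keep]), 1)
    ri, cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return cy, cx, radii[ri]
```

Feeding it the edge pixels of a synthetic circle recovers the centre to within a pixel and a radius close to the true one.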

2.3. Iris Normalisation and Enhancement

The size of the iris differs according to the retraction and dilation of the pupil; it is therefore not constant, and this distortion can affect the result of signature extraction. We thus transform the iris image into a rectangular shape of fixed size (64 × 256 pixels) [4]. The resulting image (Figure 9(b)) has low contrast and unbalanced luminance due to ambient light, which also affects signature extraction and code comparison. To correct this and to highlight the texture of the iris, we equalised the histogram of the pseudopolar iris image using a histogram-equalisation function. This function changes the grey levels to increase the image contrast and highlight the iris texture (Figure 9(c)), which carries the relevant characteristics of the human eye [30].
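A sketch of the pseudopolar unwrapping to a fixed 64 × 256 grid followed by histogram equalisation (NumPy only; the concentric pupil/iris boundaries and the nearest-neighbour sampling are simplifications, not the authors' exact mapping):

```python
import numpy as np

def unwrap_iris(gray, centre, pupil_r, iris_r, height=64, width=256):
    """Map the iris annulus (assumed concentric between pupil_r and
    iris_r) onto a fixed-size rectangle: rows sweep the radius, columns
    sweep the angle."""
    cy, cx = centre
    thetas = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    radii = np.linspace(pupil_r, iris_r, height)
    r, t = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, gray.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, gray.shape[1] - 1)
    return gray[ys, xs]

def equalise(gray):
    """Histogram equalisation: map grey levels through the normalised CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    span = max(cdf.max() - cdf.min(), 1)
    lut = np.round(255.0 * (cdf - cdf.min()) / span).astype(np.uint8)
    return lut[gray]
```

Unwrapping a synthetic bright annulus yields a 64 × 256 rectangle dominated by the annulus intensity, which can then be equalised.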

2.4. Iris-Texture Analysis and Biometric-Signature Extraction Based on Meyer Wavelets

The interpretation of the iris texture is not a simple problem, since the texture is very irregular and cannot be precisely modelled by traditional mathematical techniques. The extracted model should characterise the individual corresponding to the iris; this model is often called a biometric signature. With a view to proposing a new approach to iris-parameter extraction, our efforts gave rise to a contribution that consists of defining a model of the iris represented by well-selected coefficients of the Meyer wavelet transform [34]. Following several analyses, we noticed that multiscale Meyer wavelets gave undeniable results. The Meyer wavelet has a Fourier transform with compact support while remaining regular; this regularity implies a much faster decay in time at infinity. The Meyer scaling function and the mother wavelet are defined analytically in the Fourier space. The normalised and enhanced iris image carries important information in the vertical details of the texture (Figure 9(c)). Applying the Meyer wavelets to the vertical details required folding the image on all four sides by symmetry, as shown in Figure 10.

Multiscale analysis allows the iris signature to be constructed from the coefficients of the vertical details. Following several practice tests, we decided to keep only the vertical details of the fourth scale; beyond the fifth level of resolution, the details are too dominated by the folding effect. For the construction of the signature, these details are compared with an experimentally calculated threshold to build a binary code. The obtained signature is divided into four parts: top-left (TL), top-right (TR), bottom-left (BL), and bottom-right (BR), as shown in Figure 11. We then split the signature into these four parts and analysed each one separately to determine the most relevant quadrant. This study is detailed in the experimentation section.
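A sketch of the signature construction (NumPy only): in the real system the coefficient array would be the vertical-detail sub-band of a level-4 discrete Meyer decomposition (e.g., PyWavelets' `dmey` wavelet); here the binarisation and quadrant split are shown on an arbitrary coefficient matrix, and the zero threshold is an assumption in place of the experimentally calibrated one.

```python
import numpy as np

def binary_signature(coeffs, threshold=0.0):
    """Binarise wavelet detail coefficients into an iris code."""
    return (np.asarray(coeffs) > threshold).astype(np.uint8)

def quadrants(code):
    """Split the code into TL, TR, BL, BR quadrants (Figure 11 layout)."""
    h, w = code.shape
    return {"TL": code[:h // 2, :w // 2], "TR": code[:h // 2, w // 2:],
            "BL": code[h // 2:, :w // 2], "BR": code[h // 2:, w // 2:]}

def tl_tr(code):
    """Concatenate the two most informative quadrants (TL and TR)."""
    q = quadrants(code)
    return np.concatenate([q["TL"].ravel(), q["TR"].ravel()])
```

The TL–TR concatenation mirrors the combined signature evaluated in the experiments.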

3. Estimation of Similarities between Codes

Once signatures were extracted, the bit-by-bit comparison of two iris codes, A and B, was given by the normalised Hamming distance [35], defined in (35), with N being the number of elements of the signature vector:

$$\mathrm{HD}(A, B) = \frac{1}{N} \sum_{i=1}^{N} A_i \oplus B_i. \tag{35}$$

In the case where the majority of the bits of codes A and B are equal, the two compared codes are considered equivalent; otherwise, the two codes are different. Generally, if HD tends to 0, the individual is authenticated; if HD tends to 1, the individual is considered an impostor. Once the Hamming distances were calculated, we could fix the decision threshold in the training stage as the value at the intersection between the curves of the authentic users and the impostors, as shown in Figure 12.
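The normalised Hamming distance and the decision rule can be sketched as follows (the 0.35 threshold is an illustrative assumption, not the trained value from Figure 12):

```python
import numpy as np

def hamming_distance(a, b):
    """Normalised Hamming distance: fraction of disagreeing bits."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.mean(a != b))

def decide(code_a, code_b, threshold=0.35):
    """Authenticate when HD falls below the learned threshold (value assumed)."""
    return hamming_distance(code_a, code_b) < threshold
```

For instance, codes [1, 0, 1, 1] and [1, 1, 1, 0] disagree in two of four bits, giving an HD of 0.5 and a rejection at this threshold.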

4. Experiment Study and Discussion

Two databases were used to validate our approach: CASIA Iris Distance v 4.0 (CID) [36] and the MiraclHB (MHB) database created in our laboratory [37]. Because our database is in video form, it allowed us to test our method of calculating head rotation in order to correctly locate the eyes when a face is in front view. In order to evaluate the proposed method of 3D head-pose estimation, we conducted tests on the MHB database; the proposed approach requires camera calibration to compute the focal distance, which limited this part of the experiment study to the MHB database. The context we were dealing with only required an overall estimation of the face orientation. In our system, we considered rotations bounded within a tolerated interval for the three axes X, Y, and Z, and our approach made it possible to guide the user to keep their head within this interval for the three axes of rotation. Table 1 shows a comparison of our method with the literature. These mean-absolute-error (MAE) results prove the robustness of our approach, under the conditions explained above, with respect to methods that use a single sensor (monocular camera).


Table 1: Mean absolute error (MAE, in degrees) of 3D head-pose estimation.

Method                           Yaw    Pitch   Roll
Our approach                     2.04   3.23    3.87
María Díaz Barros et al. [38]    2.56   3.39    3.99
Vicente et al. [39]              3.2    6.2     4.3

The localisation of the iris and the eyelids is essential for signature extraction. This step is preceded by a localisation phase of the person's face and eyes using the Viola–Jones method. Indeed, the size of the rectangle locating the subject's eye allows us to estimate the axes of the ellipses fitted to the iris and the parabola parameters for eyelid detection by the elliptical or parabolic Hough transform. In this way, we ensure that the method is not affected by changes in scale. The choice of the Hough transform is justified by its robustness, its speed, and its resistance to noise. Figures 13 and 14 show a series of iris- and eyelid-localisation examples from the two databases, MHB and CID.

The results are calculated as the averages of the correct and false detection rates of the elliptic or parabolic Hough transform, given by

$$\mathrm{CAD} = \frac{\text{number of correct detections}}{\text{total number of images}} \times 100, \qquad \mathrm{FAD} = \frac{\text{number of false detections}}{\text{total number of images}} \times 100.$$

The Hough transform results show more satisfactory percentages on the MHB database than on the CID database: the average correct detection rates are 96.54% for MHB and 93.49% for CID, and the average false detection rates are 0.52% for MHB and 0.62% for CID, as shown in Table 2. The errors in iris and eyelid localisation are due to eyeglasses worn by some subjects in both databases and to strong head rotation in the CID database.


Table 2: Correct (CAD) and false (FAD) detection rates (%) on MHB and CID.

                            MHB              CID
                            CAD     FAD     CAD     FAD
Elliptic Hough transform    97.65   0.48    94.32   0.56
Parabolic Hough transform   95.44   0.56    92.66   0.69
Average                     96.54   0.52    93.49   0.62

Observing the signature extracted with the Meyer wavelets, we noticed that the obtained code is divided into four parts (Figure 11). We therefore split the signature into four parts and analysed each one separately by calculating its performance rates, using the receiver-operating-characteristic (ROC) curve and the distribution of Hamming distances (HD), respectively, in order to choose the most competitive signature. The ROC curve graphically represents the performance of a verification system for the different threshold values; in our biometric system, it plots the evolution of the false acceptance rate (FAR) against the correct acceptance rate (CAR). An ROC curve increases from point (0, 0) to point (1, 1). The operating point of the curve is the point closest to the upper-left corner; it represents the inflexion point of the system, corresponding to the best sensitivity and specificity, and provides a means of calculating the area under the ROC curve (A-ROC). A biometric system is better when this value tends towards 1. Table 3 shows the A-ROC results obtained on the two databases.


Table 3: A-ROC of each signature quadrant.

Signature part   A-ROC for MHB   A-ROC for CID
TL               0.88            0.85
TR               0.89            0.83
BL               0.77            0.74
BR               0.75            0.71
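The A-ROC figure can be computed as follows (a NumPy sketch sweeping the decision threshold over FAR/CAR pairs and integrating by trapezoids; the genuine/impostor Hamming-distance samples used to exercise it are illustrative, not data from the paper):

```python
import numpy as np

def roc_area(genuine_hd, impostor_hd, n_thresholds=200):
    """Area under the ROC curve (CAR vs. FAR) for Hamming distances.

    A genuine comparison is correctly accepted when its HD falls below
    the threshold; an impostor comparison is falsely accepted likewise.
    """
    genuine = np.asarray(genuine_hd, dtype=float)
    impostor = np.asarray(impostor_hd, dtype=float)
    ts = np.linspace(0.0, 1.0, n_thresholds)
    car = np.array([(genuine < t).mean() for t in ts])   # correct acceptance rate
    far = np.array([(impostor < t).mean() for t in ts])  # false acceptance rate
    # far is nondecreasing in t, so integrate CAR over FAR by trapezoids
    return float(np.sum((car[1:] + car[:-1]) * np.diff(far)) * 0.5)
```

Perfectly separated distance distributions give an area of 1; overlapping ones give something in between.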

Figures 15 and 16 show an example of the ROC and HD curves of signatures, respectively, compared using the MHB database.

A study of the false rejection rate (FRR), correct rejection rate (CRR), correct acceptance rate (CAR), and false acceptance rate (FAR) of four signatures, TL, TR, BL, and BR, is demonstrated in Tables 4 and 5, respectively.


Table 4: Performance rates of each signature quadrant on the MHB database.

Signature part   FRR    CRR    CAR    FAR
TL               0.11   0.91   0.89   0.09
TR               0.13   0.89   0.87   0.11
BL               0.46   0.71   0.54   0.29
BR               0.32   0.75   0.68   0.25


Table 5: Performance rates of each signature quadrant on the CID database.

Signature part   FRR    CRR    CAR    FAR
TL               0.13   0.89   0.87   0.14
TR               0.17   0.86   0.88   0.12
BL               0.53   0.65   0.58   0.32
BR               0.36   0.68   0.65   0.33

Following the observation of the performance rates of each signature fragment, we combined the TL and TR parts into a single signature. This analysis showed that the use of the TL–TR part is promising and justified, as indicated in Table 6. In fact, the Meyer decomposition concentrates the important information, the texture of the iris, in the TL–TR part and the least important information, representing the eyelid region, in the BL–BR part.


Table 6: Performance rates of the combined TL–TR signature.

Signature part    FRR     CRR      CAR      FAR
TL–TR with MHB    0.015   0.982    0.986    0.0194
TL–TR with CID    0.025   0.961    0.973    0.021
Average           0.02    0.9715   0.9795   0.0202

The results obtained on the two databases show that our approach performs better on the MiraclHB database. This is due to the care taken when capturing the eyes to avoid problems of scaling, rotation, and translation. Indeed, the horizontal and vertical movements of the iris are due to a rotation of the entire eyeball that lets the person look in different directions; however, these movements must not exceed a well-defined angle with respect to the axis of the straight-ahead view [40, 41]. Outside this range, the iris is twisted. In our case, we propose an approach that improves the iris-normalisation step by guiding the user to hold their head within the desired yaw, pitch, and roll rotation angles. Indeed, two signatures extracted from the same iris under twisting would differ, which affects the authentication result. Tables 4–6 confirm that the results on our MHB database are better than those on the CID database, thanks to the refinement of iris normalisation following the proposal of a new scheme that solves the problem of iris twisting. The average false rejection, correct rejection, correct acceptance, and false acceptance rates in Table 6 also show that our system is promising compared with existing work in the literature (Table 7).


Table 7: Comparison with existing approaches.

Approach               FRR      CRR      CAR      FAR
Our approach           0.02     0.9715   0.9795   0.0202
Iglesias et al. [42]   0.09     NA       NA       0.13
Pradeepa et al. [43]   0.0315   NA       NA       0.0274

5. Conclusion

In this work, we focused on iris-based authentication. A comprehensive methodology for authenticating people by iris analysis was developed. The proposed approach includes head-pose estimation, iris localisation, normalisation, and signature extraction, followed by a decision phase separating authentic individuals from impostors. The experimental results show the promising efficiency and robustness of our method. Indeed, the 3D head-pose estimation step has been shown to compensate for acquisition defects and undesirable effects caused by strong head rotation. In order to properly situate our proposed iris-signature-extraction approach, based on the intelligent use of Meyer wavelets, relative to existing ones, an evaluative and comparative study was conducted. In future work, we aim to exploit video-based multimodal approaches or to construct a three-dimensional iris texture to improve our biometric system.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request (https://bit.ly/3fH6rE8).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All the authors contributed equally to this work.

Acknowledgments

The authors would like to acknowledge the support of the Deanship of Scientific Research at Prince Sattam Bin Abdulaziz University under research project 2019/01/10061.

References

  1. J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148–1161, 1993. View at: Publisher Site | Google Scholar
  2. J. Daugman, “Statistical richness of visual phase information: update on recognizing persons by iris patterns,” International Journal of Computer Vision, vol. 45, no. 1, pp. 25–38, 2001. View at: Google Scholar
  3. J. Daugman, “Demodulation by complex-valued wavelets for stochastic pattern recognition,” International Journal of Wavelets, Multiresolution and Information Processing, vol. 1, no. 1, pp. 1–17, 2003. View at: Publisher Site | Google Scholar
  4. J. Daugman, “How iris recognition works,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004. View at: Publisher Site | Google Scholar
  5. S. Mallat and S. Zhong, “Characterization of signals from multiscale edges,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 7, pp. 710–732, 1992. View at: Publisher Site | Google Scholar
  6. M. Nabti and A. Bouridane, “An effective and fast iris recognition system based on a combined multiscale feature extraction technique,” Pattern Recognition, vol. 41, no. 3, pp. 868–879, 2008. View at: Publisher Site | Google Scholar
  7. R. P. Wildes, “Iris recognition: an emerging biometric technology,” Proceedings of the IEEE, vol. 85, no. 9, pp. 1348–1363, 1997. View at: Publisher Site | Google Scholar
  8. P. R. wildes, C. J. asmuth, K. hanna et al., “Automated, non-invasive iris recognition system and method,” 1996. View at: Google Scholar
  9. V. Némesin and S. Derrode, “Quality-driven and real-time iris recognition from close-up eye videos,” Signal, Image and Video Processing, vol. 10, pp. 153–160, 2016. View at: Google Scholar
  10. B. Kang and K. Park, “A study on iris image restoration,” 2005. View at: Google Scholar
  11. B. Ait El Fquih and F. Desbouvries, “Kalman filtering for triplet Markov chains: applications and extensions,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’05), vol. 4, New York, NY, USA, 2005. View at: Google Scholar
  12. C. A. Perez, V. A. Lazcano, and P. A. Estevez, “Real-time iris detection on coronal-axis-rotated faces,” IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), vol. 37, no. 5, pp. 971–978, 2007. View at: Publisher Site | Google Scholar
  13. M. Rizon, T.-Y. Chai, A. Almejrad, and N. Alajlan, “Real-time iris detection,” Artificial Life and Robotics, vol. 15, pp. 296–301, 2010. View at: Google Scholar
  14. P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition CVPR 2001, vol. 1, New, York, NY, USA, 2001. View at: Google Scholar
  15. M. Arsalan, R. Naqvi, D. Kim, P. Nguyen, M. Owais, and K. Park, “IrisDenseNet: robust Iris segmentation using densely connected fully convolutional networks in the images by visible light and near-infrared light camera sensors,” Sensors, vol. 18, no. 5, p. 1501, 2018. View at: Publisher Site | Google Scholar
  16. M. Jenadeleh, M. Pedersen, and D. Saupe, “Blind quality assessment of iris images acquired in visible light for biometric recognition,” Sensors, vol. 20, no. 5, p. 1308, 2020. View at: Publisher Site | Google Scholar
  17. R. Tobji, W. DI, and N. Ayoub, “A synthetic fusion rule based on flda and pca for iris recognition using 1d log-gabor filter,” Mathematical Problems in Engineering, vol. 11, 2019. View at: Google Scholar
  18. Q. Zhang, X. Zhu, Y. Liu et al., “Iris recognition based on adaptive optimization log-gabor filter and rbf neural network,” in Biometric Recognition, Z. Sun, R. He, J. Feng, S. Shan, and Z. Guo, Eds., pp. 312–320, Springer International Publishing, Berlin, Germany, 2019. View at: Google Scholar
  19. M. T. Khan, D. Arora, and S. Shukla, “Feature extraction through iris images using 1-d gabor filter on different iris datasets,” in Proceedings of the 2013 Sixth International Conference on Contemporary Computing (IC3), pp. 445–450, Berlin, Germany, 2013. View at: Google Scholar
  20. W. W. Boles, “A security system based on human iris identification using wavelet transform,” in Proceedings of the 1st International Conference on Conventional and Knowledge Based Intelligent Electronic Systems KES ’97, vol. 2, pp. 533–541, Berlin, Germany, 1997. View at: Google Scholar
  21. W. W. Boles and B. Boashash, “A human identification technique using images of the iris and wavelet transform,” IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185–1188, 1998. View at: Publisher Site | Google Scholar
  22. C. S. Chin, B. J. Andrew Teoh, and C. L. David Ngo, “Tokenised discretisation in iris verification,” IEICE Electronics Express, vol. 2, no. 11, pp. 349–355, 2005.
  23. R. Mansour, “Iris recognition using Gauss-Laplace filter,” American Journal of Applied Sciences, vol. 13, 2016.
  24. X. Yuan and P. Shi, “Efficient iris recognition system based on iris anatomical structure,” IEICE Electronics Express, vol. 4, no. 17, pp. 555–560, 2007.
  25. F. Rossant, M. T. Eslava, E. A. Thomas, F. Amiel, and A. Amara, “Iris identification and robustness evaluation of a wavelet packets based algorithm,” in Proceedings of the IEEE International Conference on Image Processing (ICIP), vol. 3, 2005.
  26. E. Rydgren, E. A. Thomas, F. Amiel, F. Rossant, and A. Amara, “Iris features extraction using wavelet packets,” in Proceedings of the International Conference on Image Processing (ICIP), vol. 2, pp. 861–864, 2004.
  27. B. Akrout, I. Khanfir, C. Ben Amar, and B. Ben Amor, “A new scheme of signature extraction for iris authentication,” 2009.
  28. I. K. Kallel, S. A. Bouhamed, and B. Akrout, “Clever use of Meyer wavelet for iris recognition,” in Proceedings of the 2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), pp. 1–6, 2017.
  29. Y.-P. Huang, S.-W. Luo, and E.-Y. Chen, “An efficient iris recognition system,” in Proceedings of the International Conference on Machine Learning and Cybernetics, vol. 1, pp. 450–454, 2002.
  30. A. Aminu Ghali, S. Jamel, K. Mohamad, S. K. Ahmad Khalid, Z. Pindar, and M. Mat Deris, “An improved low contrast image in normalization process for iris recognition system,” 2018.
  31. L. Yin and A. Basu, “MPEG4 face modeling using fiducial points,” in Proceedings of the International Conference on Image Processing (ICIP), vol. 1, pp. 109–112, 1997.
  32. Y. Du, B. Bonney, R. Ives, D. Etter, and R. Schultz, “Analysis of partial iris recognition using a 1D approach,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’05), vol. 2, 2005.
  33. P. Mukhopadhyay and B. B. Chaudhuri, “A survey of Hough transform,” Pattern Recognition, vol. 48, no. 3, pp. 993–1010, 2015.
  34. N. A. Leontiev and A. G. Nyurova, “The use of discrete Meyer wavelet for speech segmentation,” in Proceedings of the 2019 International Multi-Conference on Industrial Engineering and Modern Technologies (FarEastCon), pp. 1–3, 2019.
  35. J. Daugman, “High confidence personal identification by rapid video analysis of iris texture,” in Proceedings of the 1992 International Carnahan Conference on Security Technology: Crime Countermeasures, pp. 50–60, 1992.
  36. T. Tan, Z. He, and Z. Sun, “Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition,” Image and Vision Computing, vol. 28, no. 2, pp. 223–230, 2010.
  37. B. Akrout and W. Mahdi, “Spatio-temporal features for the automatic control of driver drowsiness state and lack of concentration,” Machine Vision and Applications, vol. 26, no. 1, pp. 1–13, 2015.
  38. J. M. Díaz Barros, F. Garcia, B. Mirbach, and D. Stricker, “Real-time monocular 6-DoF head pose estimation from salient 2D points,” in Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), pp. 121–125, 2017.
  39. F. Vicente, Z. Huang, X. Xiong, F. De la Torre, W. Zhang, and D. Levi, “Driver gaze tracking and eyes off the road detection system,” IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 4, pp. 2014–2027, 2015.
  40. S. T. Moore, I. S. Curthoys, and S. G. McCoy, “VTM: an image-processing system for measuring ocular torsion,” Computer Methods and Programs in Biomedicine, vol. 35, no. 3, pp. 219–230, 1991.
  41. H. Scherer, W. Teiwes, and A. H. Clarke, “Measuring three dimensions of eye movement in dynamic situations by means of videooculography,” Acta Oto-Laryngologica, vol. 111, no. 2, pp. 182–187, 1991.
  42. P. Iglesias, R. Hernández-García, R. J. Barrientos, E. Goncalves, and M. Mora, “Iris recognition based on displacement information using a sparse matching technique,” in Proceedings of the 2019 38th International Conference of the Chilean Computer Science Society (SCCC), pp. 1–8, 2019.
  43. S. Pradeepa, R. Anisha, and W. J. Jenkin, “Classifiers in iris biometrics for personal authentication,” in Proceedings of the 2019 2nd International Conference on Signal Processing and Communication (ICSPC), pp. 352–355, 2019.

Copyright © 2020 Belhassen Akrout and Sana Fakhfakh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

