Computational Intelligence and Neuroscience

Volume 2014 (2014), Article ID 380585, 11 pages

http://dx.doi.org/10.1155/2014/380585
Research Article

Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

Md. Rabiul Islam

Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh

Received 15 December 2013; Revised 29 May 2014; Accepted 19 June 2014; Published 10 July 2014

Academic Editor: Daoqiang Zhang

Copyright © 2014 Md. Rabiul Islam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method over a Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, namely, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to obtain the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system at various dimensions. Experimental results show the versatility of the proposed system across the four different classifiers and various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach of Ma et al., the log-likelihood ratio score fusion approach of Schmid et al., and the single level feature fusion approach of Hollingsworth et al.

1. Introduction

Biometrics deals with the identification of individuals based on their biological or behavioral characteristics, and it provides the central component of automatic person identification technology based on unique traits such as face, iris, retina, speech, palmprint, hand geometry, signature, fingerprint, and so forth [1]. Iris recognition is among the most reliable existing biometric technologies because of the unique features of the human iris. The iris offers accuracy, uniqueness, high information content, stability, reliability, and real-time access capability compared with other biometric patterns [2, 3]. The uniqueness of the anatomical structure of the iris facilitates the differentiation among individuals. The human iris pattern does not change and remains constant over a person's lifetime from one year of age until death [1, 4]. The iris is a thin circular diaphragm lying between the blackish pupil and the whitish sclera [5]. Because of this uniqueness and stability, iris recognition is a reliable person identification technique [6].

Though a unimodal biometric system performs well in some cases, it suffers from a variety of problems such as nonuniversality, susceptibility to spoofing, noise in the sensed information, intraclass variations, and interclass similarities [7]. A multimodal biometric system (i.e., the combination of more than one unimodal biometric system) can solve some of these problems. Recently, researchers have been combining multiple modalities for person identification, an approach referred to as multimodal biometrics. Multimodal biometric identification systems can utilize more than one physical, behavioral, or chemical characteristic for enrollment. Such systems consolidate the evidence presented by multiple biometric sources and typically provide better recognition performance than systems based on a single biometric modality. For combining multimodal information, four different fusion strategies, that is, sensor level fusion, feature level fusion, score level fusion, and decision level fusion, have generally been used in various multimodal systems [8, 9]. In sensor level fusion, information taken from multiple sensors is processed and integrated to generate a new vector. In feature level fusion, extracted features are fused to produce a combined vector. In score level fusion, scores are computed by different classifiers and combined to obtain the final result. Lastly, in the decision fusion approach, the outputs of several classifiers are combined to build the multimodal biometric system, which includes Multiple Classifier Selection (MCS) and Dynamic Classifier Selection (DCS).

In this paper, a new hybrid fusion based pair-of-iris recognition approach is proposed in which three different fusion levels, namely, feature fusion, score fusion, and decision fusion, are used. A Discrete Hidden Markov Model (DHMM) has been used to classify the input iris pattern. For decision fusion, the outputs of four different classifiers, that is, a left iris based unimodal DHMM classifier, a right iris based unimodal DHMM classifier, a left-right iris feature fusion based multimodal DHMM classifier, and a left-right iris log-likelihood ratio score fusion based multimodal DHMM classifier, are combined to achieve the final result. The following sections of the paper deal with related work and the proposed fusion scheme, automated iris segmentation and feature extraction techniques, and experimental results for each of the four classifiers and their combined output, with a comparison against existing multimodal iris recognition systems.

2. Literature Review and Architecture of the Proposed System

Relatively little work has been done on iris recognition using multiple iris modalities. Most multimodal iris biometric work combines iris with other biometric traits. The largest cluster of papers in this area deals with the combination of face and iris, and other multibiometric systems involve almost any combination of iris with other modalities such as fingerprint, palmprint, speech, and so forth [11]. Two different fusion strategies, that is, feature fusion and score fusion, have been used in those works. Researchers have applied these fusion techniques with different strategies in multimodal settings. Using feature fusion, person identification has been performed by combining iris and facial feature vectors [12]. Wang et al. also used a common face and iris feature vector for multimodal biometric recognition [13, 14]. Rattani and Tistarelli proposed a multiunit multimodal biometric system in which face, left iris, and right iris image vectors were fused to enhance performance [15]. An iris and fingerprint multimodal feature fusion based cryptographic key generation system [16] and a person identification system [17] were also developed. Various multimodal score level fusion schemes have likewise been proposed. In [18], different algorithms were used to calculate the scores of iris and face modalities, and a Support Vector Machine (SVM) was used to combine those scores. A user-specific score fusion approach for iris and face [19], iris and fingerprint based multimodal score fusion [20], and score fusion for an iris and palmprint multibiometric system [21] were also developed.

Some fusion approaches have been proposed for iris recognition itself, again using feature and score fusion methods. Using iris feature fusion, Vatsa et al. combined left and right iris images for a multimodal biometric system [22]. Hollingsworth et al. [23] created a single average image from multiple frames of an iris video and showed that single level fusion is better than a multigallery score fusion method. Among score fusion methods, Ma et al. [24] used three different templates from an iris image and averaged their scores to obtain the final result. Krichen et al. [25] used the minimum match score instead of iris image averaging to obtain the final score for iris biometric recognition. Schmid et al. developed a log-likelihood ratio method to fuse the scores, which performs better than Hamming distance based score fusion [26].

Hollingsworth et al. [27] proposed an approach of image averaging using single level feature from iris image to improve the matching performance. They compared their system with three reimplemented score fusion approaches of Ma et al. [24], Krichen et al. [25], and Schmid et al. [26]. They reported that the proposed feature fusion approach performed better than Ma’s or Krichen’s score level fusion methods of Hamming distance scores and also performed better than Schmid’s log-likelihood method of score fusion.

In this proposed work, a combined approach of feature fusion and score fusion based pair-of-iris multibiometric systems has been developed, as shown in Figure 1. Features from the left and right iris images are fused, and the log-likelihood ratio based score fusion method is applied to obtain the iris recognition score. Finally, to reach the final recognition result, a voting method is applied to combine the outputs in a Multiple Classifier Selection (MCS) scheme covering the individual modality of each iris (i.e., left and right iris), the feature fusion based modality, and the log-likelihood ratio based modality. A Discrete Hidden Markov Model (DHMM) has been used as the classifier, and Principal Component Analysis (PCA) has been applied to reduce the dimensionality of the iris features at different levels of the overall approach. Various experiments have been performed comparing the proposed feature and decision fusion based MCS approach with the existing Hamming distance score fusion approach of Ma et al. [24], the log-likelihood ratio score fusion approach of Schmid et al. [26], and the feature fusion approach of Hollingsworth et al. [27].

Figure 1: Block diagram of the proposed feature and decision fusion based Multiple Classifier Selection for iris recognition.

3. Iris Segmentation and Feature Extraction

Since iris image preprocessing and feature extraction play a vital role in the overall recognition performance, standard iris localization, segmentation, normalization, and feature encoding procedures have been applied in the proposed system.

The first step is to isolate the actual iris region from the eye image. For this, an edge map is created using the Canny directional edge detector; Figure 2(b) shows the resulting edge map. To extract the original iris region, two boundaries have to be estimated: a vertical edge map has been created for detecting the outer (iris/sclera) boundary, and a horizontal edge map for the inner (pupil/iris) boundary. The circular Hough transform has been applied to deduce the center coordinates and radii of the outer and inner boundaries of the iris [28, 29]; the results are shown in Figure 2(c). After that, the parts of the iris overlapped by the eyelids have to be removed. The eyelids were isolated by first fitting a line to the upper and lower eyelids using the linear Hough transform [30]. A second horizontal line is then drawn, intersecting the first line at the iris edge closest to the pupil; this is done for both the top and bottom eyelids, and the second horizontal line allows maximum isolation of the eyelid regions. The result of removing the eyelid regions is shown in Figure 2(d).
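The boundary search described above can be sketched with a minimal, pure-NumPy circular Hough transform. This is a toy stand-in for the localization step, not the paper's implementation; the function name, the radius grid, and the 60-angle sampling are illustrative choices.

```python
import numpy as np

def circular_hough(edge_map, radii):
    """Minimal circular Hough transform sketch: each edge pixel votes for
    candidate circle centres at every radius in `radii`; the (cx, cy, r)
    accumulator cell with the most votes is taken as the boundary circle.
    Used twice in the pipeline: once with a large radius range for the
    iris/sclera boundary and once with a small range for the pupil/iris one."""
    h, w = edge_map.shape
    acc = np.zeros((len(radii), h, w))
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    for ri, r in enumerate(radii):
        # Centres at distance r from each edge pixel, sampled over 60 angles.
        cx = (xs[:, None] - r * np.cos(thetas)).round().astype(int)
        cy = (ys[:, None] - r * np.sin(thetas)).round().astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc[ri], (cy[ok], cx[ok]), 1)   # accumulate votes
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return cx, cy, radii[ri]
```

On a synthetic edge map containing one circle, the accumulator peaks at the true centre and radius, which is the property the segmentation step relies on.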

Figure 2: (a) Eye image, (b) edge map using the Canny edge detector, (c) results of the circular Hough transform, and (d) removal of the eyelid regions.

In normalization, the iris region is transformed into fixed dimensions so that two different iris images can be compared at the same spatial locations. Dimensional inconsistencies of the iris region occur due to stretching of the iris caused by pupil dilation under varying levels of illumination, image capturing distance, angular deflection of camera and eye, and so forth. The normalization process removes all of these difficulties to produce a fixed size iris vector. Daugman's Rubber Sheet Model [31] has been used to normalize the iris region. In this process, each point of the iris region is converted into a pair of polar coordinates (r, θ), where r lies in the interval [0, 1] and θ is the angle in [0, 2π], as shown in Figure 3. Problems can occur because of rotation of the eye within the eye socket. For this reason, the center of the pupil has been considered as the reference point, and a number of data points are selected along each radial line (the radial resolution). In this way, a fixed dimension iris region is obtained, as shown in Figure 3(b).
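The rubber-sheet remapping can be sketched as below, assuming the pupil and iris boundaries have already been located as circles. The function name, the resolutions, and the nearest-neighbour sampling are illustrative simplifications of Daugman's model, not the paper's exact settings.

```python
import numpy as np

def rubber_sheet(image, pupil, iris, radial_res=20, angular_res=240):
    """Daugman rubber-sheet sketch: remap the annular iris region onto a
    fixed radial_res x angular_res rectangle. `pupil` and `iris` are
    (cx, cy, r) circles; r in [0, 1] runs from the pupil boundary to the
    iris boundary along each radial line, with the pupil centre as the
    reference point."""
    xc_p, yc_p, r_p = pupil
    xc_i, yc_i, r_i = iris
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0, 1, radial_res)[:, None]
    # Linear interpolation between the two boundary points on each radial line.
    x = (1 - r) * (xc_p + r_p * np.cos(theta)) + r * (xc_i + r_i * np.cos(theta))
    y = (1 - r) * (yc_p + r_p * np.sin(theta)) + r * (yc_i + r_i * np.sin(theta))
    xi = np.clip(x.round().astype(int), 0, image.shape[1] - 1)
    yi = np.clip(y.round().astype(int), 0, image.shape[0] - 1)
    return image[yi, xi]   # fixed-size polar representation
```

Whatever the input eye image size, the output is always radial_res x angular_res, which is what makes two irises directly comparable.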

Figure 3: (a) Normalization process and (b) iris region with fixed dimension.

For feature extraction, it is important to capture the most discriminative information present in the iris region. Different techniques can be used for feature extraction, including Gabor filtering, zero-crossing 1D wavelet filters, Log-Gabor filters, and the Haar wavelet. In this work, the Log-Gabor filtering technique has been applied to extract the iris features effectively. 9600 feature values have been taken from each iris region, and Principal Component Analysis has been used to reduce the feature vector to 550 values. The feature extraction and dimensionality reduction process is shown in Figure 4.
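A rough sketch of the encoding pipeline follows: a 1D Log-Gabor filter applied row-wise in the frequency domain, with real and imaginary responses concatenated, followed by PCA via SVD for dimensionality reduction. The wavelength and bandwidth values are illustrative guesses rather than the paper's settings; note that a 20 x 240 normalized iris yields exactly 9600 feature values under this scheme, matching the figure quoted above.

```python
import numpy as np

def log_gabor_features(polar_iris, wavelength=18, sigma_ratio=0.5):
    """1D Log-Gabor filtering sketch: each row of the normalized (polar)
    iris is filtered in the frequency domain; real and imaginary parts of
    the complex response are concatenated into one feature vector."""
    n = polar_iris.shape[1]
    freq = np.fft.fftfreq(n)
    f0 = 1.0 / wavelength                      # centre frequency
    lg = np.zeros(n)
    pos = freq > 0
    # Log-Gabor transfer function, defined on positive frequencies only.
    lg[pos] = np.exp(-(np.log(freq[pos] / f0) ** 2) /
                     (2 * np.log(sigma_ratio) ** 2))
    response = np.fft.ifft(np.fft.fft(polar_iris, axis=1) * lg, axis=1)
    return np.hstack([response.real.ravel(), response.imag.ravel()])

def pca_reduce(X, k):
    """Project the row vectors of X onto the top-k principal components
    (PCA via SVD on the mean-centred data)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

In the paper the PCA target dimension is 550; the small k in the usage below is only to keep the toy example well-posed with few samples.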

Figure 4: Feature encoding and dimensionality reduction process.

4. Iris Recognition Using HMM Classifier

In the training phase of the proposed iris recognition system, a DHMM (Discrete HMM) λ_v has been built for each iris class v, 1 ≤ v ≤ V [32]. The model parameters have been estimated to maximize the likelihood P(O | λ_v) of the training observation vectors for the v-th iris by using the Baum-Welch algorithm. The Baum-Welch reestimation formulas take the standard form [33]

\bar{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad \bar{b}_j(k) = \frac{\sum_{t:\, O_t = v_k} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)},

where ξ_t(i, j) is the probability of being in state i at time t and in state j at time t + 1 given the model and the observations, and γ_t(i) = Σ_j ξ_t(i, j).

In the testing phase, for each unknown iris to be recognized, the processing shown in Figure 5 has been carried out. This procedure includes
(i) measurement of the observation sequence O = (o_1, o_2, ..., o_T) via a feature analysis of the iris pattern,
(ii) transformation of the continuous values of O into integer values,
(iii) calculation of the model likelihoods P(O | λ_v) for all possible models, 1 ≤ v ≤ V,
(iv) declaration of the iris as the person v* whose model likelihood is highest, that is, v* = argmax_{1 ≤ v ≤ V} P(O | λ_v).
In this proposed work, the probability computation step has been performed using Baum's Forward-Backward algorithm [33, 34].
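Steps (iii)-(iv) can be sketched as follows, assuming each enrolled identity already has a trained DHMM given as a (pi, A, B) parameter tuple. The forward recursion is run in log space for numerical stability; the function names and the toy models in the usage are illustrative.

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Forward algorithm for a discrete HMM, in log space: returns
    log P(O | lambda) for the integer observation sequence `obs`, given
    initial probabilities pi, transition matrix A, and emission matrix B."""
    logA, logB = np.log(A), np.log(B)
    alpha = np.log(pi) + logB[:, obs[0]]              # initialisation
    for o in obs[1:]:
        # alpha_t(j) = logsum_i [ alpha_{t-1}(i) + log a_ij ] + log b_j(o_t)
        alpha = np.logaddexp.reduce(alpha[:, None] + logA, axis=0) + logB[:, o]
    return np.logaddexp.reduce(alpha)                 # log P(O | lambda)

def recognize(obs, models):
    """Step (iv): declare the identity whose DHMM gives the highest
    likelihood, v* = argmax_v log P(O | lambda_v)."""
    scores = [log_forward(obs, *m) for m in models]
    return int(np.argmax(scores)), scores
```

With single-state toy models the forward score reduces to a product of emission probabilities, which makes the decision rule easy to verify by hand.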

Figure 5: Block diagram of DHMM based iris recognition system (modified) [10].

Medium sized iris datasets have been used for most recent iris recognition tasks, which makes it difficult to evaluate performance accurately. At the same time, more and more large-scale iris recognition systems are being deployed in real-world applications, and many new problems arise in the classification and indexing of large-scale iris image databases, so scalability is another challenging issue in iris recognition. The CASIA-IrisV4 iris database has been released to promote research on long-range and large-scale iris recognition systems; accordingly, it has been used to measure the performance of the overall iris recognition system [35].

The CASIA-IrisV4 database was developed and released to the international biometrics community, having been updated from CASIA-IrisV1 through CASIA-IrisV3 since 2002. More than 3,000 users from 70 countries or regions have downloaded CASIA-Iris, and much excellent work on iris recognition has been based on these iris image databases. CASIA-IrisV4 contains 54,601 iris images from 1,800 genuine subjects and 1,000 virtual subjects. The iris images were collected under near-infrared illumination or synthesized and are represented as 8-bit gray-level images. The data sets were collected at different times using different methodologies, including the CASIA-Iris-Interval, CASIA-Iris-Lamp, CASIA-Iris-Distance, and CASIA-Iris-Thousand subsets. Since CASIA-Iris-Interval is well suited to the detailed texture features of iris images captured by a close-up iris camera, this subset has been used for the experimental work of the proposed system. The most compelling feature of this subset is its circular NIR LED array with suitable luminous flux for iris imaging.

In measuring the accuracy of the individual left iris and right iris based unimodal recognition systems, the critical parameter, that is, the number of hidden states of the DHMM, can affect the performance of the system. A tradeoff is made to find the optimum number of hidden states; the comparison results, as Receiver Operating Characteristic (ROC) curves, are shown in Figure 6, which presents the left and right iris based unimodal recognition performance together.

Figure 6: ROC curve of left iris and right iris based unimodal system.

5. Feature Fusion Based HMM Classifier

Feature level fusion can significantly improve the performance of a multibiometric iris recognition system, besides improving population coverage, deterring spoof attacks, increasing the degrees of freedom, and reducing the failure-to-enroll rate. Although the storage requirements, processing time, and computational demands of a feature fusion based system are much higher than those of a unimodal iris recognition system [36], effective integration of the left and right iris features can remove or reduce most of the above-mentioned problems. Feature level fusion of the left and right iris features is therefore an important fusion strategy for improving the overall performance of the proposed system. More information is retained at the feature level than at the score level or decision level, so feature level fusion can be expected to achieve greater performance than the other fusion strategies in the proposed multimodal iris recognition system.

Feature level fusion is performed by simple concatenation of the feature sets taken from the left and right information sources. By concatenating the two feature vectors X and Y, a new feature vector Z = {X, Y} has been created, whose dimension is the sum of the dimensions of X and Y. The objective is to combine the two feature sets into a new vector Z that better represents the individual. Because the dimensionality of the concatenated vector is very large, a dimensionality reduction technique is necessary to reduce the search domain of the learned database. The feature selection process chooses a minimal feature subset Z' of Z, with |Z'| < |Z|, that improves the performance on a training set of feature vectors. The objective of this phase is to find an optimal subset of features from the complete feature set. Principal Component Analysis [37] has been used to reduce the dimensionality of the feature vector. Figure 7 shows the process of the left and right iris feature fusion based multimodal technique of the proposed system.
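The concatenate-then-reduce step can be sketched as below (PCA via SVD on the fused matrix). The function name and the target dimension in the usage are illustrative; in the paper the reduced dimension is 550.

```python
import numpy as np

def fuse_features(left, right, k):
    """Feature-level fusion sketch: concatenate left- and right-iris
    feature vectors row-wise into Z = [X | Y], then project onto the
    top-k principal components to shrink the fused vector."""
    Z = np.hstack([left, right])        # (n_samples, d_left + d_right)
    Zc = Z - Z.mean(axis=0)             # centre before PCA
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:k].T                # reduced fused features
```

The reduced vectors then feed the feature-fusion DHMM classifier in place of the raw concatenation.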

Figure 7: Feature fusion based multimodal iris recognition system.

For feature level fusion, the optimum number of hidden states of the DHMM has been chosen, and Figure 8 shows the comparison as ROC curves. Figure 9 compares unimodal left iris, unimodal right iris, and left-right iris feature fusion based recognition. The primary goal of the left-right iris feature fusion based multimodal system is to achieve performance equal to or better than that of either unimodal iris recognition system. When the noise level of the right iris is high, the left iris unimodal system performs better than the right, so the combined left-right performance should be at least as good as that of the left iris unimodal system. When the noise level of the left iris is high, the right iris recognition performance is better, and the integrated performance should be at least as good as that of the right iris recognition. The system also works very well when neither iris image contains noise. Figure 9 shows this behavior of the left-right feature fusion based multimodal iris recognition.

Figure 8: Recognition performance of left-right iris feature fusion based multimodal system among different number of hidden states of DHMM.
Figure 9: Performance comparison among unimodal left iris, unimodal right iris, and left-right iris feature fusion based multimodal recognition system.

6. Likelihood Ratio Score Fusion Based HMM Classifier

In the left-right iris feature fusion based method, the left and right iris features are combined into one high dimensional vector. However, this method gives all modalities equal weight, which is its main disadvantage. The problem can be solved by assigning different weights according to the noise conditions of the left and right iris modalities. Score fusion with a weight per modality can be used for this purpose; it also allows dynamic adjustment of the importance of each stream, through the weights, according to its estimated reliability.

Left-right iris likelihood ratio based score fusion is a technique in which the reliability of each modality is measured from the outputs of the DHMM classifiers for the left and right iris features. If one modality becomes corrupted by noise, the other modality fills the gap through its larger weight, and the recognition rate increases by using this score fusion based method. This also holds in the case of a complete interruption of one stream: the weight assigned to the clean modality should then be close to the maximum and the weight assigned to the missing stream close to zero, which practically makes the system revert to single-stream recognition automatically. The process is instantaneous and reversible; that is, if the missing stream is restored, the weights return to their levels before the interruption. The integration weight, which determines the contribution of each modality in the left-right iris score fusion based recognition system, is calculated from the relative reliability of the two modalities. Figure 10 shows the process of the likelihood ratio score fusion based pair of iris recognition systems.

Figure 10: Process of likelihood ratio score fusion based pair of iris recognition systems.

To calculate the score fusion result, the DHMM outputs of the individual iris (i.e., left and right) recognizers are combined by a weighted sum rule to produce the final score. For a given left-right iris test pair (O_L, O_R), the recognized class is given by [38]

c* = argmax_c [ γ log P(O_L | λ_L^c) + (1 − γ) log P(O_R | λ_R^c) ],

where λ_L^c and λ_R^c are the left iris and right iris DHMMs for class c, respectively, and log P(O_L | λ_L^c) and log P(O_R | λ_R^c) are their log-likelihoods against the class.

The weighting factor γ determines the contribution of each modality to the final decision. Of the two most popular integration approaches, baseline reliability ratio based integration and N-best recognition hypothesis reliability ratio based integration, the former has been used, where the integration weight is calculated from the reliability of each individual modality. The reliability of modality m can be calculated as [39]

R_m = (1 / (N − 1)) Σ_{n=1}^{N} ( max_j log P(O | λ_j) − log P(O | λ_n) ),

which means that the average difference between the maximum log-likelihood and the other ones is used to determine the reliability of each modality, where N is the number of classes considered in measuring the reliability of each modality m ∈ {L, R}.

Then the integration weight from the left iris reliability measure can be calculated as [40]

γ = R_L / (R_L + R_R),

where R_L and R_R are the reliability measures of the outputs of the left iris and right iris DHMMs, respectively.

The integration weight of the right iris modality is accordingly 1 − γ. The results of applying the score fusion approach to the iris recognition system are shown in Figure 11, which compares the unimodal left iris, unimodal right iris, and left-right iris score fusion based multimodal systems. The score fusion approach achieves a higher recognition rate than either unimodal system alone.
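The reliability-weighted fusion described in this section can be sketched as follows, assuming each DHMM produces one log-likelihood per candidate class: the reliability of a modality is the average gap between its best log-likelihood and the rest, and the left-iris weight is the ratio of the two reliabilities. Function names and the toy scores in the usage are illustrative.

```python
import numpy as np

def reliability(loglikes):
    """Reliability of one modality: the average difference between the
    maximum log-likelihood and the other ones over the N candidate
    classes, R = (N*max - sum) / (N - 1)."""
    L = np.asarray(loglikes, dtype=float)
    N = len(L)
    return (L.max() * N - L.sum()) / (N - 1)

def fuse_scores(left_ll, right_ll):
    """Weighted-sum score fusion: gamma = R_L / (R_L + R_R); the fused
    score per class is gamma*left + (1-gamma)*right, and the class with
    the highest fused score wins."""
    rl, rr = reliability(left_ll), reliability(right_ll)
    gamma = rl / (rl + rr) if rl + rr > 0 else 0.5   # guard degenerate case
    fused = gamma * np.asarray(left_ll) + (1 - gamma) * np.asarray(right_ll)
    return int(np.argmax(fused)), gamma
```

When one modality separates the classes sharply and the other barely does, gamma shifts almost entirely to the sharp modality, which is the noise-adaptive behaviour motivated above.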

Figure 11: Results of comparison among unimodal left iris, unimodal right iris, and multimodal left-right iris recognition system.

7. Multiple Classifier Fusion for the Proposed System

An effective way to combine multiple classifiers is required once a set of classifier outputs has been created. Various architectures and schemes have been proposed for combining multiple classifiers [41]. The majority vote [42–45] is the most popular approach. Other voting schemes include the maximum, minimum, median, Nash [46], average [47], and product [48] schemes. Further approaches to combining classifiers include rank-based methods such as the Borda count [49], the Bayes approach [44, 45], the Dempster-Shafer theory [45], the fuzzy integral [50], fuzzy connectives [51], fuzzy templates [52], probabilistic schemes [53], and combination by neural networks [54]. The majority, average, maximum, and Nash voting techniques [41, 47] have been used in this work to find the most efficient voting technique for combining the four classifier outputs.

In the majority voting technique, the correct class is the one chosen most often by the different classifiers. If all the classifiers indicate different classes, or in the case of a tie, the class with the highest overall output is selected as the correct class. In the maximum voting technique, the class with the highest overall output, c* = argmax_j max_{i=1,...,K} d_ij(x), is selected as the correct class, where K is the number of classifiers and d_ij(x) represents the output of the i-th classifier for class j on the input vector x.

The averaging voting technique averages the individual classifiers' output confidences for each class across the whole ensemble. The class with the highest average value, c* = argmax_{j=1,...,C} (1/K) Σ_{i=1}^{K} d_ij(x), is chosen as the correct class, where C is the number of classes and d_ij(x) represents the output confidence of the i-th classifier for the j-th class on the input x.

In the Nash voting technique, each voter assigns a number between zero and one to each candidate, and the products of the voters' values for the candidates are compared; the candidate with the highest product, c* = argmax_j Π_{i=1}^{K} d_ij(x), is the winner. These voting techniques, that is, average vote, maximum vote, Nash vote, and majority vote, have all been applied to measure the accuracy of the proposed system. Figure 12 shows the comparison results for the different voting techniques. Though the recognition rates of the voting techniques are very close to each other, the majority voting technique gives the highest recognition rate for the proposed system.
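The four voting rules can be sketched over a K x C matrix of classifier confidences; the tie-break for majority voting follows the rule stated above (highest overall output). This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def combine(outputs, method="majority"):
    """Combine a (K classifiers x C classes) confidence matrix with one of
    four voting rules: majority (class winning the most classifier-level
    votes, ties broken by highest overall output), maximum, average, and
    Nash (product of confidences). Returns the winning class index."""
    O = np.asarray(outputs, dtype=float)
    if method == "majority":
        votes = np.bincount(O.argmax(axis=1), minlength=O.shape[1])
        winners = np.flatnonzero(votes == votes.max())
        if len(winners) == 1:
            return int(winners[0])
        # Tie: pick the tied class with the highest overall output.
        return int(winners[np.argmax(O[:, winners].max(axis=0))])
    if method == "maximum":
        return int(O.max(axis=0).argmax())
    if method == "average":
        return int(O.mean(axis=0).argmax())
    if method == "nash":
        return int(O.prod(axis=0).argmax())
    raise ValueError(method)
```

On well-separated confidence matrices the four rules usually agree, which mirrors the observation above that their recognition rates are very close.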

Figure 12: Results of different types of voting techniques for the proposed system.

The results of each unimodal iris system, the left-right iris feature fusion based multimodal system, the likelihood ratio score fusion based multimodal system, and the MCS based majority voting technique are compared in Figure 13. The results show that the MCS based majority voting technique achieves higher performance than any of the individual unimodal and multimodal approaches.

Figure 13: Performance comparison among each unimodal, multimodal, and MCS based majority voting technique for iris recognition.

8. Performance Analysis of the Proposed System

The performance of the proposed feature and score fusion based Multiple Classifier Selection (MCS) approach has been compared with the existing Hamming distance score fusion approach of Ma et al. [24], the log-likelihood ratio score fusion approach of Schmid et al. [26], and the feature fusion approach of Hollingsworth et al. [27]. Figure 14 shows the results, in which the proposed system achieves the highest recognition rate among all of the above-mentioned existing iris recognition systems. The Hamming distance score fusion approach of Ma et al. has been reimplemented to measure the performance comparison with the proposed approach. The ROC curves show that the existing feature fusion approach of Hollingsworth et al. [27] gives higher recognition results than the Hamming distance score fusion approach of Ma et al. [24] and the log-likelihood ratio score fusion approach of Schmid et al. [26]. Finally, the proposed feature and decision fusion based MCS system outperforms all of the existing multimodal approaches, whether feature fusion or score fusion. The reason is that the existing approaches applied either feature fusion or score fusion alone, whereas the proposed approach combines feature fusion and score fusion with the individual left iris and right iris recognition techniques. The outputs of these four classifiers (i.e., the unimodal left iris classifier, the unimodal right iris classifier, the left-right iris feature fusion based classifier, and the left-right iris likelihood ratio score fusion based classifier) are combined using Multiple Classifier Selection (MCS) through the majority voting technique. Since four classifiers feed the majority vote, ties are possible.
In that case, because the left-right iris feature fusion based multimodal system achieves higher performance than the unimodal systems and the likelihood ratio score fusion based multimodal system, as shown in Figure 12, the output of the feature fusion based system is taken to break the tie. Two unimodal systems are used because if one fails to recognize, the other can still retain the accurate output. Since the feature set contains richer information about the raw biometric data than the final decision, integration at the feature level is expected to provide better recognition results; hence left and right iris feature fusion has been applied to improve the performance of the proposed system. In feature fusion, the left and right iris features are integrated with equal weights, whereas the decisions of different classifiers can be fused with different weights according to the noise levels of the left and right iris; likelihood ratio score fusion has therefore been applied to combine the classifier outputs nonequally. When the four classifier outputs are combined with MCS based majority voting, the proposed multimodal system gains all of the above-mentioned advantages and gives a higher recognition rate than the existing multimodal iris recognition approaches of Ma et al., Schmid et al., and Hollingsworth et al.
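The final decision step can be sketched as a majority vote over the four class labels, with the feature-fusion classifier breaking ties as described above. The function and parameter names are illustrative.

```python
def mcs_decision(left, right, feat_fusion, score_fusion):
    """MCS sketch: majority vote over the four classifiers' predicted
    class labels; on a tie, fall back to the feature-fusion classifier's
    decision (the strongest single classifier per Figure 12)."""
    decisions = [left, right, feat_fusion, score_fusion]
    counts = {}
    for d in decisions:
        counts[d] = counts.get(d, 0) + 1
    best = max(counts.values())
    winners = [d for d, c in counts.items() if c == best]
    return winners[0] if len(winners) == 1 else feat_fusion
```

With four voters a 2-2 split is the common tie case, and the feature-fusion label then decides the output.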

Figure 14: Performance comparison between proposed and different existing approaches of multimodal iris recognition.

The time required to fuse the classifier outputs is directly proportional to the number of modalities used in the proposed system. Since four different classifiers are used, the learning and testing times increase accordingly; the execution time grows with the number of classifiers involved in the majority voting method. Reducing the processing time so that the proposed system can operate in real-time environments and support large-population applications is left as further work.

9. Conclusions and Observations

Experimental results show the superiority of the proposed multimodal feature and score fusion based MCS system over the existing multimodal iris recognition systems of Ma et al., Schmid et al., and Hollingsworth et al. Though the CASIA-IrisV4 dataset has been used to measure the performance of the proposed system, the database has some limitations: because near-infrared light is used for illumination, the CASIA iris images do not contain specular reflections. Some other iris databases, such as LEI, UBIRIS, and ICE, contain iris images with specular reflections and other noise factors caused by imaging under different natural lighting environments. The proposed system could be tested on these databases to measure its performance under natural lighting at various noise levels. Since the proposed system currently works in an offline environment, reducing its execution time so that it can serve real-time applications is another direction for future work.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

References

  1. A. K. Jain, R. M. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Publishers, Boston, Mass, USA, 1999.
  2. W.-S. Chen and S.-Y. Yuan, An Automatic Iris Recognition System Based on Fractal Dimension, VIPCC Laboratory, Department of Electrical Engineering, National Chi Nan University, 2006.
  3. S. Devireddy, S. Kumar, G. Ramaswamy, et al., “A novel approach for an accurate human identification through iris recognition using bitplane slicing and normalisation,” Journal of Theoretical and Applied Information Technology, vol. 5, no. 5, pp. 531–537, 2009.
  4. J. Daugman, “Biometric personal identification system based on iris analysis,” U.S. Patent 5,291,560, 1994.
  5. O. Al-Allaf, A. AL-Allaf, A. A. Tamimi, and S. A. AbdAlKader, “Artificial neural networks for iris recognition system: comparisons between different models, architectures and algorithms,” International Journal of Information and Communication Technology Research, vol. 2, no. 10, 2012.
  6. R. H. Abiyev and K. Altunkaya, “Personal iris recognition using neural network,” International Journal of Security and Its Applications, vol. 2, no. 2, pp. 41–50, 2008.
  7. R. Brunelli and D. Falavigna, “Person identification using multiple cues,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, pp. 955–966, 1995.
  8. A. Jain, K. Nandakumar, and A. Ross, “Score normalization in multimodal biometric systems,” Pattern Recognition, vol. 38, no. 12, pp. 2270–2285, 2005.
  9. P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli, “Multimodal fusion for multimedia analysis: a survey,” Multimedia Systems, vol. 16, no. 6, pp. 345–379, 2010.
  10. L. Rabiner and B. H. Juang, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, NJ, USA, 1993.
  11. K. W. Bowyer, K. P. Hollingsworth, and P. J. Flynn, “A survey of iris biometrics research: 2008–2010,” in Handbook of Iris Recognition, M. Burge and K. W. Bowyer, Eds., pp. 15–54, Springer, 2012.
  12. Z. Wang, Q. Han, X. Niu, and C. Busch, “Feature-level fusion of iris and face for personal identification,” in Advances in Neural Networks—ISNN 2009, vol. 5553 of Lecture Notes in Computer Science, pp. 356–364, 2009.
  13. Z. F. Wang, Q. Han, Q. Li, X. M. Niu, and C. Busch, “Complex common vector for multimodal biometric recognition,” Electronics Letters, vol. 45, no. 10, pp. 495–497, 2009.
  14. Z. Wang, Q. Li, X. Niu, and C. Busch, “Multimodal biometric recognition based on complex KFDA,” in Proceedings of the 5th International Conference on Information Assurance and Security (IAS '09), pp. 177–180, Xi'an, China, September 2009.
  15. A. Rattani and M. Tistarelli, “Robust multi-modal and multi-unit feature level fusion of face and iris biometrics,” Lecture Notes in Computer Science, vol. 5558, pp. 960–969, 2009.
  16. A. Jagadeesan, T. Thillaikkarasi, and K. Duraiswamy, “Protected bio-cryptography key invention from multimodal modalities: feature level fusion of fingerprint and iris,” European Journal of Scientific Research, vol. 49, no. 4, pp. 484–502, 2011.
  17. U. Gawande, M. Zaveri, and A. Kapur, “A novel algorithm for feature level fusion using SVM classifier for multibiometrics-based person identification,” Applied Computational Intelligence and Soft Computing, vol. 2013, Article ID 515918, 11 pages, 2013.
  18. F. Wang and J. Han, “Multimodal biometric authentication based on score level fusion using support vector machine,” Opto-Electronics Review, vol. 17, no. 1, pp. 59–64, 2009.
  19. N. Morizet and J. Gilles, “A new adaptive combination approach to score level fusion for face and iris biometrics combining wavelets and statistical moments,” in Advances in Visual Computing, vol. 5359 of Lecture Notes in Computer Science, pp. 661–671, 2008.
  20. A. Baig, A. Bouridane, F. Kurugollu, and G. Qu, “Fingerprint-iris fusion based identification system using a single Hamming distance,” in Proceedings of the Symposium on Bio-inspired Learning and Intelligent Systems for Security, pp. 9–12, August 2009.
  21. J. Wang, Y. Li, X. Ao, C. Wang, and J. Zhou, “Multi-modal biometric authentication fusing iris and palmprint based on GMM,” in Proceedings of the 15th IEEE/SP Workshop on Statistical Signal Processing (SSP '09), pp. 349–352, Cardiff, Wales, August-September 2009.
  22. M. Vatsa, R. Singh, A. Noore, and S. K. Singh, “Belief function theory based biometric match score fusion: case studies in multi-instance and multi-unit iris verification,” in Proceedings of the 7th International Conference on Advances in Pattern Recognition, pp. 433–436, February 2009.
  23. K. Hollingsworth, K. W. Bowyer, and P. J. Flynn, “Image averaging for improved iris recognition,” in Advances in Biometrics, vol. 5558 of Lecture Notes in Computer Science, pp. 1112–1121, 2009.
  24. L. Ma, T. Tan, Y. Wang, and D. Zhang, “Efficient iris recognition by characterizing key local variations,” IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739–750, 2004.
  25. E. Krichen, L. Allano, S. Garcia-Salicetti, and B. Dorizzi, “Specific texture analysis for iris recognition,” in Proceedings of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication (AVBPA '05), pp. 23–30, Hilton Rye Town, NY, USA, July 2005.
  26. N. A. Schmid, M. V. Ketkar, H. Singh, and B. Cukic, “Performance analysis of iris-based identification system at the matching score level,” IEEE Transactions on Information Forensics and Security, vol. 1, no. 2, pp. 154–168, 2006.
  27. K. Hollingsworth, T. Peters, K. W. Bowyer, and P. J. Flynn, “Iris recognition using signal-level fusion of frames from video,” IEEE Transactions on Information Forensics and Security, vol. 4, no. 4, pp. 837–848, 2009.
  28. L. Ma, Y. Wang, and T. Tan, “Iris recognition using circular symmetric filters,” in Proceedings of the 16th International Conference on Pattern Recognition, vol. 2, pp. 414–417, 2002.
  29. R. P. Wildes, J. C. Asmuth, G. L. Green et al., “System for automated iris recognition,” in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 121–128, Sarasota, Fla, USA, December 1994.
  30. L. Masek, Recognition of human iris patterns for biometric identification [M.S. thesis], School of Computer Science and Software Engineering, The University of Western Australia, 2003.
  31. S. Sanderson and J. Erbetta, “Authentication for secure environments based on iris scanning technology,” in Proceedings of the IEE Colloquium on Visual Biometrics, pp. 8/1–8/7, London, UK, 2000.
  32. M. Hwang and X. Huang, “Shared-distribution hidden Markov models for speech recognition,” IEEE Transactions on Speech and Audio Processing, vol. 1, no. 4, pp. 414–420, 1993.
  33. L. R. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.
  34. W. Khreich, E. Granger, A. Miri, and R. Sabourin, “On the memory complexity of the forward-backward algorithm,” Pattern Recognition Letters, vol. 31, no. 2, pp. 91–99, 2010.
  35. “CASIA Iris Image Database Version 4.0—Biometrics Ideal Test (BIT),” http://www.idealtest.org/.
  36. D. Maltoni, “Biometric fusion,” in Handbook of Fingerprint Recognition, Springer, 2nd edition, 2009.
  37. M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
  38. A. Rogozan and P. S. Sathidevi, “Static and dynamic features for improved HMM based visual speech recognition,” in Proceedings of the 1st International Conference on Intelligent Human Computer Interaction, pp. 184–194, Allahabad, India, 2009.
  39. J. S. Lee and C. H. Park, “Adaptive decision fusion for audio-visual speech recognition,” in Speech Recognition, Technologies and Applications, F. Mihelic and J. Zibert, Eds., p. 550, Vienna, Austria, 2008.
  40. W. Feng, L. Xie, J. Zeng, and Z. Liu, “Audio-visual human recognition using semi-supervised spectral learning and hidden Markov models,” Journal of Visual Languages and Computing, vol. 20, no. 3, pp. 188–195, 2009.
  41. N. Wanas, Feature based architecture for decision fusion [Ph.D. thesis], Department of Systems Design Engineering, University of Waterloo, Ontario, Canada, 2003.
  42. R. Battiti and A. M. Colla, “Democracy in neural nets: voting schemes for classification,” Neural Networks, vol. 7, no. 4, pp. 691–707, 1994.
  43. C. Ji and S. Ma, “Combinations of weak classifiers,” IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 32–42, 1997.
  44. L. Lam and C. Y. Suen, “Optimal combinations of pattern classifiers,” Pattern Recognition Letters, vol. 16, no. 9, pp. 945–954, 1995.
  45. L. Xu, A. Krzyzak, and C. Y. Suen, “Methods of combining multiple classifiers and their applications to handwriting recognition,” IEEE Transactions on Systems, Man and Cybernetics, vol. 22, no. 3, pp. 418–435, 1992.
  46. L. I. Kuncheva, “A theoretical study on six classifier fusion strategies,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 281–286, 2002.
  47. P. Munro and B. Parmanto, “Competition among networks improves committee performance,” in Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche, Eds., vol. 9, pp. 592–598, The MIT Press, Cambridge, Mass, USA, 1997.
  48. D. M. J. Tax, M. van Breukelen, and R. P. W. Duin, “Combining multiple classifiers by averaging or by multiplying?” Pattern Recognition, vol. 33, no. 9, pp. 1475–1485, 2000.
  49. T. K. Ho, J. J. Hull, and S. N. Srihari, “Decision combination in multiple classifier systems,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 1, pp. 66–75, 1994.
  50. S. Cho and J. H. Kim, “Combining multiple neural networks by fuzzy integral for robust classification,” IEEE Transactions on Systems, Man and Cybernetics, vol. 25, no. 2, pp. 380–384, 1995.
  51. L. Kuncheva, “An application of OWA operators to the aggregation of multiple classification decisions,” in The Ordered Weighted Averaging Operators: Theory and Applications, R. Yager and J. Kacprzyk, Eds., pp. 330–343, Kluwer Academic, Dordrecht, The Netherlands, 1997.
  52. L. Kuncheva, J. C. Bezdek, and M. A. Sutton, “On combining multiple classifiers by fuzzy templates,” in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '98), pp. 193–197, IEEE, Pensacola Beach, Fla, USA, August 1998.
  53. J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, “On combining classifiers,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 3, pp. 226–239, 1998.
  54. M. Ceccarelli and A. Petrosino, “Multi-feature adaptive classifiers for SAR image segmentation,” Neurocomputing, vol. 14, no. 4, pp. 345–363, 1997.