Applied Computational Intelligence and Soft Computing
Volume 2013 (2013), Article ID 515918, 11 pages
Research Article

A Novel Algorithm for Feature Level Fusion Using SVM Classifier for Multibiometrics-Based Person Identification

1Department of Computer Technology, Yeshwantrao Chavan College of Engineering, Nagpur 441110, India
2Department of Computer Engineering, Sardar Vallabhbhai National Institute of Technology, Surat, India
3Nagar Yuwak Shikshan Sanstha, Nagpur, India

Received 1 April 2013; Revised 16 June 2013; Accepted 17 June 2013

Academic Editor: Zhang Yi

Copyright © 2013 Ujwalla Gawande et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Recent times have witnessed many advancements in the fields of biometrics and multimodal biometrics, typically in the areas of security, privacy, and forensics. Even the best unimodal biometric systems often cannot achieve a high recognition rate. Multimodal biometric systems overcome various limitations of unimodal systems, such as nonuniversality, achieving lower false acceptance and higher genuine acceptance rates. More reliable recognition performance is achievable because multiple pieces of evidence of the same identity are available. The work presented in this paper focuses on a multimodal biometric system using fingerprint and iris. Distinct textural features of the iris and fingerprint are extracted using a Haar wavelet-based technique. A novel feature level fusion algorithm is developed to combine these unimodal features using the Mahalanobis distance technique. A support-vector-machine-based learning algorithm is used to train the system on the extracted features. The performance of the proposed algorithms is validated and compared with other algorithms using the CASIA iris database and a real fingerprint database. The simulation results show that our algorithm has a higher recognition rate and a much lower false rejection rate than existing approaches.

1. Introduction

With the advantages of reliability and stability, biometric recognition has been developing rapidly for security and personal identity recognition. Protecting resources from intruders is a crucial problem for their owner. A multimodal biometric system integrates multiple biometrics to improve security and accuracy and is hence capable of handling the nonuniversality problem of human traits more efficiently. In fact, it is very common to use a variety of biological characteristics for identification, because people knowingly or unknowingly use different biological characteristics to identify a person.

Fusion of multiple biometric traits provides more useful information than a single unimodal biometric trait. Applying a different feature extraction technique to each modality can capture features that the other technique misses. Supplementary information on the same identity helps in achieving high performance [1]. Prevailing practices in multimodal fusion are broadly categorized as prematching and postmatching fusion [2]. Feature level fusion is a prematching activity: it incorporates the feature sets of multiple modalities. A feature set holds richer information about the raw biometric data than the match score or the final decision, so integration at the feature level is expected to offer good recognition results. But fusion at this level is hard to accomplish: the information obtained from different modalities may be so heterogeneous that fusing it is far from easy. Sequential and parallel feature fusion results in a high-dimensional feature vector. The fewer the biometric traits, the less time feature fusion requires, while more traits cover more information; the time required to fuse the feature vectors is directly proportional to the number of modalities. The primary objective of this work is to achieve an increased recognition rate with reduced processing time. In light of these facts it is better to use as few traits as possible, provided that the fusion of the features extracted from them yields a higher recognition rate than a unimodal system, so that using multiple biometrics for recognition pays off.

Fingerprint and iris are widely used biometric traits, and they are the traits fused at the feature level in this work. The Haar wavelet is known for its reduced computational complexity and is employed for feature extraction in both cases. A new method of feature level fusion is proposed, implemented, and tested in this work, with the aim of obtaining better performance than the unimodal features. The proposed algorithm fuses the features of the individual modalities accurately and efficiently, as is evident from the simulation results. Ongoing research in the multimodal area has focused on postclassification or matching-score-level biometric fusion because of its simplicity [3, 4]. However, large-scale biometric identification applications still require performance improvements. Identification is a more computation- and time-demanding application than identity verification. Therefore a more specialized classification-based biometric system should be pursued, not only to achieve the desired performance improvement but also to decrease the execution time [1, 2]. Commonly used classifiers for different biometrics are support vector machines (SVMs) with different kernels (especially Gaussian and polynomial), Gaussian-mixture-model-based classifiers, neural networks, and multilayer perceptrons [5–8]. Most of them provide significant performance improvements, but their results depend strongly on the available datasets [6]. In the proposed work, feature level fusion trained with an SVM-based classifier is used to evaluate the performance of the proposed system. The idea behind the SVM is to map the input vectors into a high-dimensional feature space using the "kernel trick" and then to construct a linear decision function in this space so that the dataset becomes separable with a maximum margin [6, 9].
Quite often the number of feature extraction techniques implemented by a multimodal system equals the number of biometrics under consideration. Using a single feature extraction technique for both contributing traits makes the framework stronger. This research work aims at reducing the false rejection rate, the false acceptance rate, and the training and testing time for reliable recognition.

The remainder of this paper is organized as follows. Section 2 briefly reviews existing multimodal biometric systems. Section 3 describes the extraction of features from fingerprint and iris using the Haar wavelet-based technique, their fusion using the novel algorithm, and classification using the SVM. Section 4 presents the experimental results. Section 5 compares the recognition performance of the proposed algorithms with existing recognition and fusion algorithms. Finally, Section 6 concludes our research and outlines future work.

2. Literature Review

In recent times, multimodal biometric systems have attracted the attention of researchers, and a variety of articles proposing different approaches for unimodal and multimodal biometric systems can be found [1, 2, 10–14]. Multimodal fusion has a synergistic effect, enhancing the value of the information. The first multimodal biometric system was proposed by Jain and Ross in 2002 [10]. Since then, a great deal of research has been devoted to multimodal biometric systems. A number of published works demonstrate that the fusion process is effective, because fused scores provide much better discrimination than individual scores [1, 2]. A voluminous literature deals with a variety of techniques for making the features more informative [9–12]. A single input with multiple feature extraction algorithms [9], multiple samples with a single feature extraction algorithm [15], and two or more different modalities [16] are commonly discussed in recent times. It was found in [17] that exploiting the empirical relations in multimodal biometrics can improve performance. But these improvements involve the cost of multiple sensors or multiple algorithms, which ultimately translates into higher installation and operational costs.

In most cases, multimodal fusion can be categorized into two groups: prematching fusion and postmatching fusion. Fusion prior to matching integrates pieces of evidence before matching; it includes sensor level fusion [10] and feature level fusion [18, 19]. Fusion after matching integrates pieces of evidence after matching; it includes match score level [20–22], rank level [23, 24], and decision level [3] fusion. Fusion at the decision level is too rigid, since only a limited amount of information is available at this level [25]. Integration at the matching score level is generally preferred due to the ease of accessing and combining matching scores. Since the matching scores generated by different modalities are heterogeneous, it is essential to normalize the scores before combining them, and normalization is computationally expensive [17]. Choosing an inappropriate normalization technique may result in a low recognition rate. For harmonious and effective working of the system, careful selection of the environments, the set of traits, the set of sensors, and so forth is necessary [17]. The features contain richer information about the input biometric data than the matching and decision scores, so integration at the feature level should provide better recognition results than integration at the other levels [18, 19, 25]. However, feature space and scaling differences make it difficult to homogenize features from different biometrics. The fused feature vector may also have an increased dimension compared to the unimodal features; for example, a fused vector is likely to end up with twice the dimension of the unimodal features. For fusion to achieve the claimed performance enhancement, the fusion rules must be chosen based on the type of application, the biometric traits, and the level of fusion. Biometric systems that integrate information at an early stage of processing are believed to be more effective than systems that integrate it at a later stage.

In what follows, we briefly review some recent research. Unimodal iris, unimodal palmprint, and multibiometric (iris and palmprint) systems are presented in [20]; matching is performed on scores measuring the similarity of the query feature vector to the template vector. Besbes et al. [3] proposed a multimodal biometric system using fingerprint and iris features. They used a hybrid approach based on (1) fingerprint minutiae extraction and (2) iris template encoding through a mathematical representation of the extracted iris region. This approach relied on two recognition modalities, each providing its own decision; the final decision was obtained by ANDing the unimodal decisions. Experimental verification of the theoretical claim is missing from their work. Aguilar et al. [26] worked on multibiometrics using a combination of the fast Fourier transform (FFT) and Gabor filters to enhance fingerprint imaging, followed by a novel recognition stage using local features and statistical parameters. They used the fingerprints of both thumbs; each fingerprint was processed separately, and the unimodal results were combined into a final fused result. Yang and Ma [22] used fingerprint, palm print, and hand geometry, all derived from the same image, to implement personal identity verification. They established identity by matching score fusion, first fusing the fingerprint and palm-print features and then performing a matching-score fusion between the resulting multimodal system and the unimodal palm geometry. An approach suggested in [27] shows improved data fusion for face, fingerprint, and iris images. The approach was based on the eigenface and Gabor wavelet methods, incorporating the advantages of each single algorithm, and the recommended fusion system exhibited improved performance. Baig et al. [4] worked on a state-of-the-art framework for multimodal biometric identification that is adaptable to any kind of biometric system, with faster processing as a benefit; a framework for fusing iris and fingerprint was developed for verification, with classification based on a single Hamming distance. An authentication method presented by Nagesh Kumar et al. [21] focuses on a multimodal biometric system with two traits, face and palm print; the integrated feature vector made person authentication robust, and the final assessment was done by fusion at the matching score level, with unimodal scores fused after matching. Maurer and Baker [28] presented a fusion architecture based on Bayesian belief networks for fingerprint and voice, modelling the features with statistical distributions.

A frequency-based approach resulting in a homogeneous biometric vector, integrating iris and fingerprint data, is worked out in [25]; a Hamming-distance-based matching algorithm then operates on the unified homogeneous biometric vector. Basha et al. [23] implemented fusion of iris and fingerprint, using adaptive rank level fusion directly at the verification stage. Ko [29] worked on the fusion of fingerprint, face, and iris, discussing various possibilities of multimodal biometric fusion and strategies for improving accuracy, as well as the evaluation of image quality from the different biometrics and its influence on identification accuracy. Jagadeesan et al. [18] prepared a secured cryptographic key on the basis of iris and fingerprint features: minutiae points were extracted from the fingerprint and texture properties from the iris, feature level fusion was employed, and a 256-bit cryptographic key was the outcome. Improvement in authentication and security through 256-bit encryption was claimed as part of their results [18]. A multimodal biometric-based encryption scheme was proposed in [19], combining features of the fingerprint and iris with a user-defined secret key.

As far as multimodal biometric-based encryption schemes are concerned, there are very few proposals in the literature. Nandakumar and Jain [30] proposed a fuzzy vault-based scheme that combines iris with fingerprints. As expected, the verification performance of the multibiometric vault was better than that of the unimodal biometric vaults, but the entropy of the keys increases only from 40 bits (for the individual fingerprint and iris modalities) to 49 bits (for the multibiometric system). Nagar et al. [31] proposed a feature-level fusion framework to simultaneously protect multiple templates of a user as a single secure sketch. They implemented the framework using two well-known biometric cryptosystems, namely, fuzzy vault and fuzzy commitment.

In contrast to the approaches found in the literature and detailed earlier, the proposed approach introduces an innovative idea: fusing homogeneous-size feature vectors of fingerprint and iris at the feature level. The fusion process implemented in this work is based on the Mahalanobis distance. This fusion has the advantage of reducing the fused feature vector size, which addresses the main issue (high dimensionality) of feature level fusion. Our proposed feature fusion approach outperforms the technique suggested in [3] as well as the unimodal components in our comparisons. Extraction of features from multiple modalities, their fusion using a distinct process, and classification using an SVM are the core of this work. The proposed fusion method has not been used in earlier research; the novelty of the work lies in creating a single template from two biometric modalities and using an SVM for recognition.

3. Proposed Multimodal Biometric System

The proposed approach implements an innovative idea to fuse the features of two different modalities, fingerprint and iris. The proposed system functions are grouped into the following basic stages:
(i) the preprocessing stage, which obtains the region of interest (ROI) for further processing;
(ii) the feature extraction stage, which extracts the features from the ROI using the Haar wavelet;
(iii) the fusion stage, which combines the corresponding unimodal feature vectors. The features so obtained are fused by an innovative technique based on the Mahalanobis distance; the distances are normalized using the hyperbolic tangent (tanh) and fused by applying the average sum rule;
(iv) the classification stage, which operates on the fingerprint feature vector, the iris feature vector, and the fused feature vector of each subject separately. We applied an SVM strategy for classification to obtain the acceptance/rejection or identification decision.

These techniques are explained as follows.

3.1. Fingerprint Feature Extraction

The fingerprint image is made available by the image acquisition system. Real fingerprint images are rarely of perfect quality: they may be degraded by noise due to many factors, including variations in the skin, impression conditions, and noise of the capturing devices. The quality of some images is quite bad, and preprocessing improves them to the extent required by feature extraction. Fingerprint recognition is basically performed by minutiae-based or image-based methods. Most minutiae-based methods require extensive preprocessing operations such as normalization, segmentation, orientation estimation, ridge filtering, binarization, and thinning [32], after which minutiae detection is performed to reliably extract the minutiae features. For extracting the fingerprint features we used a Haar wavelet-based technique. This image-based technique has its roots in frequency (wavelet) analysis. An important benefit of this technique is its low computational time, which makes it more suitable for real-time applications. It requires less preprocessing and works well even with low-quality images.

In this paper, the fingerprint image was first preprocessed using normalization, histogram equalization, and average filtering to improve the image quality. Normalization adjusts the input fingerprint image to a prespecified mean and variance. The basic idea of the histogram equalization (HE) technique is to map the gray levels based on the probability distribution of the input gray levels. HE flattens and stretches the dynamic range of the image's histogram [11], which results in an overall contrast improvement, as shown in Figure 1. It produces an enhanced fingerprint image that is useful for feature extraction.
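As an illustration of the histogram equalization step described above, the following sketch maps gray levels through the normalized cumulative distribution of an 8-bit grayscale image (the function name and the 8-bit assumption are ours, not from the paper):

```python
import numpy as np

def equalize_histogram(img):
    """Flatten and stretch the dynamic range of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each gray level through the normalized cumulative distribution
    # so that the output occupies the full 0..255 range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

A low-contrast image whose gray levels occupy a narrow band (e.g. 100–109) is spread to the full 0–255 range, which is the contrast improvement visible in Figure 1(b).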

Figure 1: Preprocessing of the fingerprint image. (a) Input fingerprint image. (b) Histogram-equalized image.
3.1.1. Haar Wavelet-Based Technique for Extracting Fingerprint Features

The wavelet transform is a mathematical tool based on multilevel function decomposition. After applying the wavelet transform, a signal can be described by wavelet coefficients which represent the characteristics of the signal. If the image has distinct features at some frequency and direction, the corresponding subimages have larger energies in the wavelet transform. For this reason the wavelet transform has been widely used in signal processing, pattern recognition, and texture recognition [33]. By applying the wavelet transform, the vital information of the original image is transformed into a compressed image without much loss of information. The Haar wavelet decomposes the fingerprint image into means and deviations. The wavelet coefficients of the image consist of four subbands, each with a quarter of the original area. The upper-left subimage, composed of the low-frequency parts of both rows and columns, is called the approximate image. The remaining three images, containing vertical high frequencies (lower left), horizontal high frequencies (upper right), and high frequencies in both directions (lower right), are the detailed images [34].

If f(x, y) represents an image signal, its Haar wavelet transform is equivalent to applying two 1D filters (along the x-direction and the y-direction). As shown in Figure 2(a), LL represents the low-frequency (approximate) vectors, HL the high-frequency vectors in the horizontal direction, LH the high-frequency vectors in the vertical direction, and HH the diagonal high-frequency vectors. After the first decomposition, the LL quarter, that is, the approximate component, is submitted to the next decomposition. In this manner the decomposition is carried out four times, as shown in Figure 2(b); each level halves the array size along the x- as well as the y-direction. The original image of 160 × 96 is reduced to 10 × 6 after the fourth decomposition. From this image a single 1 × 60 feature vector is extracted by row-wise serialization. This is treated as the extracted fingerprint feature vector.
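The four-level reduction described above can be sketched as repeated extraction of the approximate (LL) band. In this sketch the LL band is computed as a 2 × 2 block mean, which matches the Haar low-pass output up to a constant scale factor; the function names and the use of plain numpy are illustrative assumptions:

```python
import numpy as np

def haar_approx(img):
    """One Haar decomposition level, keeping only the LL (approximate)
    sub-band. The LL band is the mean of each 2x2 block (up to scale)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def fingerprint_features(img):
    """Four Haar levels on a 160x96 fingerprint image -> 10x6 approximate
    band, serialized row-wise into a single 1x60 feature vector."""
    a = img.astype(float)
    for _ in range(4):       # 160x96 -> 80x48 -> 40x24 -> 20x12 -> 10x6
        a = haar_approx(a)
    return a.ravel()         # row-wise serialization: 60 elements
```

The detail (HL, LH, HH) bands are discarded at every level, since only the approximate component is carried forward in the decomposition chain.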

Figure 2: (a) Wavelet decomposition and (b) 4-level wavelet decomposition.
3.2. Iris Feature Extraction

The process of extracting features from the iris image is discussed in this section. The iris image must be preprocessed before it can be used for feature extraction: unwanted data in the image, including the eyelids, pupil, sclera, and eyelashes, should be excluded. Therefore, the iris preprocessing module performs iris segmentation, iris normalization, and image enhancement. Segmentation is essential to remove nonuseful information, namely, the pupil segment and the parts outside the iris (sclera, eyelids, and skin), and it estimates the iris boundary. For boundary estimation, the iris image is first fed to the Canny algorithm, which generates the edge map of the iris image. The detected edge map is then used to locate the exact boundaries of the pupil and iris using Hough transforms [12]. The next step is to separate the eyelids and eyelashes. A horizontal segmentation operator and image binarization were used to extract the eyelid edge information. The eyelids span the whole image in the horizontal direction, and the average of the vertical gradients is larger in the area containing the eyelid boundary; the longest possible horizontal line demarcates the eyelids and is used as the separator, segmenting the image into two parts. The eyelid boundaries were then modelled with parabolic curves according to the determined edge points. In the normalization step the polar image of the iris is translated into the Cartesian frame, depicting it as a rectangular strip as shown in Figure 3; this is done using Daugman's rubber sheet model [13]. Finally, histogram equalization was used for image enhancement.
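Daugman's rubber sheet normalization mentioned above can be sketched as follows. The circle parameters are assumed to come from the preceding Canny/Hough localization step, nearest-neighbour sampling is used for simplicity, and the function name and default strip size (matching the 240 × 20 normalized image used later) are ours:

```python
import numpy as np

def rubber_sheet(eye, cx, cy, r_pupil, r_iris, out_h=20, out_w=240):
    """Unwrap the annular iris region between the pupil and iris boundaries
    into a fixed-size rectangular strip (Daugman's rubber sheet model)."""
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0, 1, out_h)
    strip = np.zeros((out_h, out_w))
    for i, r in enumerate(radii):
        # Interpolate linearly between the pupil and iris boundary circles.
        rad = r_pupil + r * (r_iris - r_pupil)
        xs = np.clip((cx + rad * np.cos(thetas)).astype(int), 0, eye.shape[1] - 1)
        ys = np.clip((cy + rad * np.sin(thetas)).astype(int), 0, eye.shape[0] - 1)
        strip[i] = eye[ys, xs]
    return strip
```

Every radial line between the two boundaries is resampled to the same number of points, so pupil dilation and iris size variations are factored out of the resulting strip.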

Figure 3: Iris localization, segmentation, and normalization.
3.2.1. Haar Wavelet-Based Technique for Extracting Iris Features

The enhanced normalized iris image is then used to extract identifiable features of the iris pattern, using a three-level decomposition with the Haar wavelet transform. The Haar wavelet is selected in this work for its ability to capture approximate information while retaining detailed texture. The normalized image of 240 × 20 is decomposed first along the rows (for each row) and then along the columns (for each column). This produces four regions: upper-left quarter (approximate component), upper-right (horizontal component), lower-left (vertical component), and lower-right (diagonal component). The means and deviations of the coefficients are available after decomposition. The approximate quarter is again decomposed into four subquarters, and this decomposition is applied three times in total, yielding four subimages of size 30 × 2. The approximate component after the third decomposition is the representative feature vector: its first row of 1 × 30 is appended with its second row of 1 × 30 to obtain a feature vector of 1 × 60.

3.3. Fusion of Iris and Fingerprint Feature Vectors

The feature vectors extracted from the encoded input images are combined into a new feature vector by the proposed feature level fusion technique. The present work includes an innovative method of fusion at the feature level with the Mahalanobis distance technique at its core, which makes it distinct from the methods reported in the literature. The features extracted from the fingerprint and iris are homogeneous; each vector has 1 × 60 elements. These two homogeneous vectors are processed to produce a fused vector of the same order, that is, 1 × 60 elements. Most feature fusion in the literature is performed serially or in parallel, which ultimately results in a high-dimensional vector, the major problem of feature level fusion. The proposed algorithm generates a fused vector of the same size as the unimodal vectors and hence nullifies the problem of high dimensionality. The fusion process is depicted in Figure 4 and proceeds as follows.
(1) The fingerprint and iris features of the query images are obtained.
(2) The nearest match for the query feature vectors of fingerprint and iris is selected from the 4 × 100 reference feature vectors of fingerprints and irises using the Mahalanobis distance.
(3) The Mahalanobis distance between a query sample x and a reference sample y is calculated as d(x, y) = sqrt((x − y)^T S^{-1} (x − y)), where S is the within-group covariance matrix. In this paper, we assume a diagonal covariance matrix, which allows us to calculate the distance using only the mean and the variance. The minimum-distance vector is considered the most similar vector.
(4) The difference between the query and its most similar vector is computed for fingerprint and iris; these differences are again of size 1 × 60.
(5) The elements of these difference vectors are normalized with tanh to scale them between 0 and 1. Scaling the participating (input) features to the same scale ensures their equal contribution to the fusion.
(6) The mean of the two difference vectors is computed component-wise. This yields a new 1 × 60 vector, which is itself the fused feature vector.
This fused vector is used for training with the SVM. The method saves training and testing time while retaining the benefits of fusion.

Figure 4: Architecture for feature fusion.
3.4. Support Vector Machine Classifier

SVM has demonstrated superior results in various classification and pattern recognition problems [35, 36]. Furthermore, for several pattern classification applications, SVM has already been proven to provide better generalization performance than conventional techniques especially when the number of input variables is large [37, 38]. With this purpose in mind, we evaluated the SVM for our fused feature vector.

The standard SVM takes a set of input data and predicts the class to which each input belongs, which makes it a nonprobabilistic binary linear classifier [39]. Given a set of training samples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns a new sample to one category or the other. For this reason we use an SVM to validate our approach. To achieve better generalization performance, the original input space is mapped into a high-dimensional dot-product space called the feature space, and in the feature space the optimal hyperplane is determined, as shown in Figure 5. The initial optimal-hyperplane algorithm proposed by Vapnik [40] was a linear classifier; Boser et al. [39] later suggested a way to create nonlinear classifiers by applying the kernel trick, extending the linear learning machine to handle nonlinear cases. We aim to maximize the margin of separation between patterns in order to obtain a better classification result. The function that returns a dot product of two mapped patterns is called a kernel function, and different kernels can be selected to construct the SVM. The most commonly used kernel functions are the polynomial, linear, and Gaussian radial basis function (RBF) kernels. We employed two types of kernels for experimentation, namely, the radial basis function kernel and the polynomial kernel.

Figure 5: Linearly separable patterns.

The separating hyperplane (described by w · Φ(x) + b = 0) is determined by minimizing the structural risk instead of the empirical error. Minimizing the structural risk is equivalent to seeking the optimal margin between the two classes. The optimal hyperplane can be written as a combination of a few feature points, which are called the support vectors of the optimal hyperplane. SVM training can be considered a constrained optimization problem which maximizes the width of the margin while minimizing the structural risk: minimize (1/2)||w||^2 + C Σ_i ξ_i subject to y_i (w · Φ(x_i) + b) ≥ 1 − ξ_i and ξ_i ≥ 0, where b is the bias, C is the trade-off parameter, ξ_i is the slack variable, which measures the deviation of a data point from the ideal condition of the pattern, and Φ(x_i) is the feature vector in the expanded feature space. The penalty parameter C controls the trade-off between the complexity of the decision function and the number of wrongly classified testing points. The correct value cannot be known in advance, and a wrong choice of the SVM penalty parameter can lead to a severe loss in performance. Therefore the parameter values are usually estimated from the training data by cross-validation over exponentially growing sequences of C. The optimal weight vector is w = Σ_{i=1}^{N_s} α_i y_i Φ(x_i), where N_s is the number of support vectors (SVs), y_i is the target value of learning pattern x_i, and α_i is the Lagrange multiplier value of the SVs.

Classification of a test sample x is performed by f(x) = sign(Σ_{i=1}^{N_s} α_i y_i K(x_i, x) + b), where K is the kernel function that maps the input into the higher-dimensional feature space. The computational complexity and classification time of SVM classifiers with nonlinear kernels depend on the number of support vectors (SVs) required.
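A minimal classification sketch with an RBF-kernel SVM follows, using scikit-learn's SVC with the C and γ values reported later in this paper. The two synthetic "users" stand in for the fused 1 × 60 vectors; the data are illustrative, not the paper's:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two synthetic users, each enrolled with four 60-dimensional fused vectors
# (stand-ins for the training templates described in Section 3.3).
X = np.vstack([rng.normal(0.3, 0.05, (4, 60)),
               rng.normal(0.7, 0.05, (4, 60))])
y = np.array([0] * 4 + [1] * 4)

clf = SVC(kernel="rbf", C=8.0, gamma=0.1)   # parameter values from the paper
clf.fit(X, y)

probe = rng.normal(0.7, 0.05, (1, 60))      # query vector drawn near user 1
print(clf.predict(probe))                    # should classify as user 1
```

Internally the prediction evaluates the kernel expansion f(x) = sign(Σ α_i y_i K(x_i, x) + b) over the support vectors, as given above.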

The effectiveness of an SVM depends on the kernel used, the kernel parameters, and a proper soft margin or penalty value [41–43]. The selection of a kernel function is an important problem in applications, although there is no theory that tells which kernel to use. In this work two types of kernels were used for experimentation, namely, the radial basis function kernel and the polynomial kernel. The RBF kernel requires fewer parameters to set than a polynomial kernel; however, convergence for RBF kernels takes longer than for the other kernels [44].

The RBF kernel has the Gaussian form K(x, y) = exp(−||x − y||^2 / (2σ^2)), where σ is a constant that defines the kernel width.

The polynomial kernel function is described by K(x, y) = (x · y + 1)^d, where d is a positive integer representing the polynomial degree.

Before training an SVM with the radial basis function kernel, two parameters need to be found, C and γ: C is the penalty parameter of the error term and γ is a kernel parameter. By cross-validation, the best C and γ values were found to be 8.0 and 0.1, respectively. Similarly, for the polynomial SVM the polynomial order d needs to be found; after rigorous parameter tuning the best d was found to be 8. We found that the RBF kernel is superior to the polynomial kernel for our proposed feature level fusion method.
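The cross-validation over exponentially growing parameter sequences described above can be sketched with scikit-learn's GridSearchCV. The synthetic two-class data and the grid ranges here are illustrative assumptions, not the paper's dataset or exact grid:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic, well-separated two-class data in the 60-dimensional fused space.
X = np.vstack([rng.normal(0.3, 0.1, (20, 60)),
               rng.normal(0.7, 0.1, (20, 60))])
y = np.array([0] * 20 + [1] * 20)

# Exponentially growing grids for C and gamma, as is customary for SVM tuning.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [2 ** k for k in range(-2, 6)],
                     "gamma": [2 ** k for k in range(-6, 2)]},
                    cv=4)
grid.fit(X, y)
print(grid.best_params_)
```

The best (C, γ) pair is the one maximizing the mean cross-validated accuracy; the same procedure with a degree grid applies to the polynomial kernel.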

4. Experimental Results

The proposed algorithm is evaluated in three different sets of experiments. The database is generated from 100 genuine and 50 imposter sample cases. Four images of the iris as well as the fingerprint are stored for each person in the reference database; similarly, one image each of iris and fingerprint for all 100 cases is stored in the query database. In the same manner, data for 50 cases in the reference set and 50 cases in the query set is appended for the imposters. The fingerprint images are real, and the iris images are from the CASIA database. The mutual-independence assumption of the biometric traits allows us to randomly pair users from the two datasets. First, we report the experiments based on the unimodal features; the remaining experiments concern the fused pattern classification strategy. Two kernels, RBFSVM and PolySVM, are used for classification in separate experiments. Three types of feature vectors, the first from fingerprints, the second from irises, and the third from their fusion, are used as inputs in separate experimental setups. Combining the input features with the classifiers yields six different experiments, whose objective is to provide a good basis for comparison. Using kernels improves the classification performance, that is, the data separability in the high-dimensional space. Since this is a pattern recognition approach, the database is divided into training and testing parts. For unimodal biometric recognition, four features per user were used for training and one image per user for testing the performance of the system; for multimodal recognition, four fused features per user were used for training and one for testing. This separation of the database into training and test sets was used to find the average genuine acceptance rate (GAR), false acceptance rate (FAR), and training and testing times.

The results obtained are tabulated in Table 1. For 100 classes with the RBF kernel, the average rate of correct classification for the fused vector is 94%, against a GAR of 87% and 88% for unimodal fingerprint and iris, respectively. With the polynomial kernel on the same dataset, the average rate of correct classification for the fused vector is 93%, against 85% and 84% for unimodal fingerprint and iris, respectively. The false acceptance rate (FAR) reaches its attainable minimum of 0% with the fused feature vector under both kernel classifiers. We can therefore observe that RBFSVM with fused data is the best combination for both GAR and FAR.
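The GAR and FAR figures above can be computed from per-query accept/reject outcomes. A minimal sketch follows, using hypothetical outcome vectors that mirror the reported 94% GAR and 0% FAR; the function name and data are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def gar_far(genuine_accepted, impostor_accepted):
    """GAR: percentage of genuine queries correctly accepted.
       FAR: percentage of impostor queries wrongly accepted."""
    genuine_accepted = np.asarray(genuine_accepted)
    impostor_accepted = np.asarray(impostor_accepted)
    gar = 100.0 * genuine_accepted.sum() / genuine_accepted.size
    far = 100.0 * impostor_accepted.sum() / impostor_accepted.size
    return gar, far

# Hypothetical outcomes: 94 of 100 genuine queries accepted,
# none of the 50 impostor queries accepted.
genuine = np.array([1] * 94 + [0] * 6)
impostor = np.zeros(50, dtype=int)
gar, far = gar_far(genuine, impostor)  # gar = 94.0, far = 0.0
```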

Table 1: Average GAR (%), FAR (%), and response time (mean) in seconds for unimodal and multimodal techniques.

Moreover, the results reported here help to assess the unimodal and multimodal approaches in terms of training and testing time, which are also tabulated in Table 1. The best performing method, the fused feature, required a training time of 3.27 s with RBFSVM and 3.98 s with PolySVM. The minimum training time, 3.25 s, is for the iris with the Haar wavelet-based method; thus the mean training time of RBFSVM on fused data is only slightly greater than for unimodal iris. For testing, the fused feature vector outperforms the unimodal features with both kernels, taking 0.12 s with RBFSVM and 0.19 s with PolySVM. Our experimental results therefore support recommending the fused feature vector with RBFSVM as the best of the six experiments carried out: it has the highest GAR (94%), a FAR of 0%, and the least testing time (0.12 s). The fused vector with PolySVM, with a GAR of 93%, FAR of 0%, and testing time of 0.19 s, is the second best performer. The training time for fused data lags behind unimodal iris, but this is tolerable since training is carried out only once. From Table 1 it can be concluded that feature level fusion performs better than the unimodal biometric systems.
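Training and testing times of the kind reported in Table 1 can be measured as sketched below, on synthetic data. Absolute timings depend entirely on hardware and library versions, so the numbers produced here are not comparable with the paper's figures.

```python
import time
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 64))   # stand-in: 100 users x 4 fused features
y_train = np.repeat(np.arange(100), 4)
X_test = rng.normal(size=(100, 64))    # stand-in: one query per user

clf = SVC(kernel="rbf", gamma="scale")

t0 = time.perf_counter()
clf.fit(X_train, y_train)
train_time = time.perf_counter() - t0  # wall-clock training time in seconds

t0 = time.perf_counter()
clf.predict(X_test)
test_time = time.perf_counter() - t0   # wall-clock testing time in seconds
```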

5. Discussion and Comparison

This paper presents a feature level fusion method for a multimodal biometric system based on fingerprints and irises. The proposed approach to fingerprint and iris feature extraction, fusion, and classification by RBFSVM and PolySVM has been tested for unimodal as well as multimodal identification systems using the real fingerprint database and the CASIA iris database. In greater detail, the proposed approach performs fingerprint and iris feature extraction using the Haar wavelet-based method; these codified features represent a unique template. We compare our approach with Besbes et al. [3], Jagadeesan et al. [18], and Conti et al. [25].
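As a point of reference for the fusion step mentioned above, the following is a minimal sketch of the Mahalanobis distance measure itself, not of the paper's full fusion algorithm; the toy feature matrix and names are assumptions.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    # Distance of vector x from a feature distribution with the given
    # mean and inverse covariance matrix.
    d = np.asarray(x, dtype=float) - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Toy feature matrix standing in for a set of unimodal wavelet features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 4))
mu = feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(feats, rowvar=False))

d0 = mahalanobis(mu, mu, cov_inv)       # the mean is at distance 0 from itself
d1 = mahalanobis(feats[0], mu, cov_inv) # a sample's distance from the set
```

Unlike plain Euclidean distance, this measure accounts for the scale and correlation of the feature components, which is why it is attractive for combining heterogeneous unimodal features.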

Besbes et al. [3] worked on the same modalities (fingerprint and iris), but their fusion is at the decision level. They claimed improved results because of this fusion, but the claim is not supported by experimental findings. Their procedure is as follows. Fingerprint features were extracted by a minutiae-based method, the iris pattern was encoded by a 2D Gabor filter, and matching was done by Hamming distance. Each modality is processed separately to obtain its own decision, and the final decision of the system applies the “AND” operator between the decision coming from the fingerprint recognition step and that coming from the iris recognition step. This ANDing operation is fusion at the decision level: nobody can be accepted unless both results are positive. To compare their approach with our work, we fused their features sequentially and trained the resulting vectors with RBFSVM and PolySVM in two separate experiments.
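The two strategies discussed above, the decision-level ANDing of [3] and the sequential (concatenation) feature fusion used for comparison, can be sketched as follows; the function names are illustrative, not from either paper.

```python
import numpy as np

def decision_and(fingerprint_accept: bool, iris_accept: bool) -> bool:
    # Decision-level fusion as in [3]: accept only when BOTH unimodal
    # matchers accept the claimed identity.
    return fingerprint_accept and iris_accept

def sequential_fusion(fp_feat, iris_feat):
    # Feature-level alternative used for comparison: append the iris
    # feature vector to the fingerprint feature vector before training.
    return np.concatenate([np.ravel(fp_feat), np.ravel(iris_feat)])

fused = sequential_fusion(np.ones(8), np.zeros(8))  # 16-dimensional vector
```

The fused vector, unlike the AND rule, retains both unimodal feature sets for the classifier to weigh jointly, which is the essence of feature-level fusion.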

A similar attempt by Jagadeesan et al. [18] extracts minutiae points and texture properties from fingerprint and iris images, respectively. The extracted features were then combined with their innovative method to obtain a fused multibiometric template, from which a 256-bit secure cryptographic key was generated. For experimentation, they employed fingerprint images from publicly available sources (so we used our real fingerprint database) and iris images from the CASIA iris database. Training and testing were not covered in their work, so for comparison we implemented their system separately and trained it with SVM.

The feature level fusion proposed by Conti et al. [25] was a frequency-based approach. Their fused vector was a homogeneous biometric vector integrating iris and fingerprint data, and a Hamming-distance-based matching algorithm was used for the final decision. To compare with our approach, we trained their fused vector with SVM, making FAR and FRR results available for comparison, and tested their technique on our database of 100 genuine and 50 impostor cases. The comparison of FAR, FRR, and response time is tabulated in Table 2. The proposed feature level fusion exhibits improved performance compared with the fusion methods used by these researchers; from Table 2 it can be concluded that our feature level fusion performs better than the compared approaches.
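The Hamming-distance matching used by [25] for the final decision can be illustrated as follows; the example codes and the 0.30 threshold are assumptions for illustration, not values from [25].

```python
import numpy as np

def hamming_distance(code_a, code_b):
    # Normalized Hamming distance between two binary biometric codes:
    # the fraction of bit positions in which they differ.
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    return float(np.mean(a != b))

template = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # stored fused code
query    = np.array([1, 0, 1, 0, 0, 0, 1, 1])  # query fused code
d = hamming_distance(template, query)  # 2 of 8 bits differ -> 0.25
accept = d < 0.30                      # hypothetical acceptance threshold
```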

Table 2: Comparison of the proposed approach with Besbes et al. [3], Jagadeesan et al. [18], and Conti et al. [25] (FAR, FRR, and response time (mean) in seconds).

This section compares the results of our proposed method with the approaches recommended in [3, 18, 25], at the feature fusion level. The decision level fusion of [3], by ANDing the unimodal decisions, yields a GAR of 80%. By fusing their features sequentially and training with RBFSVM, we obtained GAR = 90%, FAR = 4%, a training time of 4.3 s, and a testing time of 0.20 s; with PolySVM we obtained GAR = 90%, FAR = 4%, a training time of 4.98 s, and a testing time of 0.24 s. On the basis of these experimental results, feature level fusion is better than decision level fusion for the concept proposed by [3], and feature level fusion also gives better performance than the individual traits. The fused feature vector of Jagadeesan et al. [18] achieves 0% FAR and 9% FRR with both SVM kernels; for the fused vector of one subject, it requires a training time of 4.98 s and testing time of 0.20 s with RBFSVM, and a training time of 5.01 s and testing time of 0.21 s with PolySVM. The approach of Conti et al. [25] also achieves 0% FAR with both kernels, with an FRR of 8% with RBFSVM and 9% with PolySVM; the mean training times for their fused feature sets are 4.58 s (RBFSVM) and 4.69 s (PolySVM), and the testing times are 0.19 s (RBFSVM) and 0.22 s (PolySVM). In our case, RBFSVM gives a GAR of 94%, a FAR of 0%, a training time of 3.27 s, and a testing time of 0.12 s, while PolySVM gives a GAR of 93%, a FAR of 0%, a training time of 3.98 s, and a testing time of 0.19 s. The recognition rates and error rates of the proposed approach and of [3, 18, 25] are compared in Figures 6 and 7. The results obtained from our method of fusion are superior to those of Besbes et al. [3], Jagadeesan et al. [18], and Conti et al. [25] on all four performance indicators, and of the two classifiers used in our work, RBFSVM is superior to PolySVM. The experimental results substantiate the superiority of feature level fusion and of our proposed method over the approaches of [3, 18, 25].

Figure 6: Comparison of recognition rates.
Figure 7: Comparison of error rates.

6. Conclusion

Multimodal biometrics are rich in information content, and it is a matter of satisfaction that multimodalities are now in use for a few applications covering large populations. In this paper a novel algorithm for feature level fusion and a recognition system using SVM have been proposed. The intention of this work was to evaluate our proposed method of feature level fusion using the Mahalanobis distance technique. The fused features are trained by RBFSVM and PolySVM separately. The simulation results clearly show the advantage of feature level fusion of multiple biometric modalities over single biometric feature identification; this superiority is concluded on the basis of the experimental results for FAR, FRR, and training and testing time. Of the two classifiers, RBFSVM performs better than PolySVM. The fused template generated also outperforms decision level fusion, and improvements in FAR, FRR, and response time are observed compared with existing research. From the experimental results it can be concluded that feature level fusion produces better recognition than the individual modalities, and the proposed method is strong enough to enhance the performance of multimodal biometrics. The proposed methodology has the potential for real-time implementation and large population support. The work can also be extended to other biometric modalities, and performance analysis using a noisy database may be carried out.


  1. S. Soviany and M. Jurian, “Multimodal biometric securing methods for informatic systems,” in Proceedings of the 34th International Spring Seminar on Electronic Technology, pp. 12–14, Phoenix, Ariz, USA, May 2011.
  2. S. Soviany, C. Soviany, and M. Jurian, “A multimodal approach for biometric authentication with multiple classifiers,” in International Conference on Communication, Information and Network Security, pp. 28–30, 2011.
  3. F. Besbes, H. Trichili, and B. Solaiman, “Multimodal biometric system based on fingerprint identification and iris recognition,” in Proceedings of the 3rd International Conference on Information and Communication Technologies: From Theory to Applications (ICTTA '08), pp. 1–5, Damascus, Syria, April 2008.
  4. A. Baig, A. Bouridane, F. Kurugollu, and G. Qu, “Fingerprint—Iris fusion based identification system using a single hamming distance matcher,” International Journal of Bio-Science and Bio-Technology, vol. 1, no. 1, pp. 47–58, 2009.
  5. L. Zhao, Y. Song, Y. Zhu, C. Zhang, and Y. Zheng, “Face recognition based on multi-class SVM,” in IEEE International Conference on Control and Decision, pp. 5871–5873, June 2009.
  6. T. C. Mota and A. C. G. Thomé, “One-against-all-based multiclass SVM strategies applied to vehicle plate character recognition,” in IEEE International Joint Conference on Neural Networks (IJCNN '09), pp. 2153–2159, New York, NY, USA, June 2009.
  7. R. Brunelli and D. Falavigna, “Person identification using multiple cues,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 10, pp. 955–966, 1995.
  8. S. Theodoridis and K. Koutroumbas, Pattern Recognition, Elsevier, 4th edition, 2009.
  9. B. Gokberk, A. A. Salah, and L. Akarun, “Rank-based decision fusion for 3D shape-based face recognition,” in Proceedings of the 13th IEEE Conference on Signal Processing and Communications Applications, pp. 364–367, Antalya, Turkey, 2005.
  10. A. Jain and A. Ross, “Fingerprint mosaicking,” in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '02), pp. IV/4064–IV/4067, Rochester, NY, USA, May 2002.
  11. M. Sepasian, W. Balachandran, and C. Mares, “Image enhancement for fingerprint minutiae-based algorithms using CLAHE, standard deviation analysis and sliding neighborhood,” in IEEE Transactions on World Congress on Engineering and Computer Science, pp. 1–6, 2008.
  12. X. Liu, Optimizations in iris recognition [Ph.D. thesis], Computer Science and Engineering, University of Notre Dame, Ind, USA, 2006.
  13. J. Daugman, “How iris recognition works,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21–30, 2004.
  14. L. Hong and A. Jain, “Integrating faces and fingerprints for personal identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1295–1307, 1998.
  15. J. Fierrez-aguilar, L. Nanni, J. Ortega-garcia, and D. Maltoni, “Combining multiple matchers for fingerprint verification: a case study,” in Proceedings of the International Conference on Image Analysis and Processing (ICIAP '05), vol. 3617 of Lecture Notes in Computer Science, pp. 1035–1042, Springer, 2005.
  16. A. Lumini and L. Nanni, “When fingerprints are combined with Iris—a Case Study: FVC2004 and CASIA,” International Journal of Network Security, vol. 4, no. 1, pp. 27–34, 2007.
  17. L. Hong, A. K. Jain, and S. Pankanti, “Can multibiometrics improve performance?” in IEEE Workshop on Automatic Identification Advanced Technologies, pp. 59–64, New Jersey, NJ, USA, 1999.
  18. A. Jagadeesan, T. Thillaikkarasi, and K. Duraiswamy, “Protected bio-cryptography key invention from multimodal modalities: feature level fusion of fingerprint and Iris,” European Journal of Scientific Research, vol. 49, no. 4, pp. 484–502, 2011.
  19. I. Raglu and P. P. Deepthi, “Multimodal biometric encryption using minutiae and Iris feature map,” in Proceedings of IEEE Students’ International Conference on Electrical, Electronics and Computer Science, pp. 94–934, Zurich, Switzerland, 2012.
  20. V. C. Subbarayudu and M. V. N. K. Prasad, “Multimodal biometric system,” in Proceedings of the 1st International Conference on Emerging Trends in Engineering and Technology (ICETET '08), pp. 635–640, Nagpur, India, July 2008.
  21. M. Nagesh kumar, P. K. Mahesh, and M. N. Shanmukha Swamy, “An efficient secure multimodal biometric fusion using palmprint and face image,” International Journal of Computer Science, vol. 2, pp. 49–53, 2009.
  22. F. Yang and B. Ma, “A new mixed-mode biometrics information fusion based-on fingerprint, hand-geometry and palm-print,” in Proceedings of the 4th International Conference on Image and Graphics (ICIG '07), pp. 689–693, Jinhua, China, August 2007.
  23. A. J. Basha, V. Palanisamy, and T. Purusothaman, “Fast multimodal biometric approach using dynamic fingerprint authentication and enhanced iris features,” in Proceedings of the IEEE International Conference on Computational Intelligence and Computing Research (ICCIC '10), pp. 1–8, Coimbatore, India, December 2010.
  24. M. M. Monwar and M. L. Gavrilova, “Multimodal biometric system using rank-level fusion approach,” IEEE Transactions on Systems, Man and Cybernetics B, vol. 39, no. 4, pp. 867–878, 2009.
  25. V. Conti, C. Militello, F. Sorbello, and S. Vitabile, “A frequency-based approach for features fusion in fingerprint and iris multimodal biometric identification systems,” IEEE Transactions on Systems, Man and Cybernetics C, vol. 40, no. 4, pp. 384–395, 2010.
  26. G. Aguilar, G. Sánchez, K. Toscano, M. Nakano, and H. Pérez, “Multimodal biometric system using fingerprint,” in International Conference on Intelligent and Advanced Systems (ICIAS '07), pp. 145–150, Kuala Lumpur, Malaysia, November 2007.
  27. L. Lin, X.-F. Gu, J.-P. Li, L. Jie, J.-X. Shi, and Y.-Y. Huang, “Research on data fusion of multiple biometric features,” in International Conference on Apperceiving Computing and Intelligence Analysis (ICACIA '09), pp. 112–115, Chengdu, China, October 2009.
  28. D. E. Maurer and J. P. Baker, “Fusing multimodal biometrics with quality estimates via a Bayesian belief network,” Pattern Recognition, vol. 41, no. 3, pp. 821–832, 2008.
  29. T. Ko, “Multimodal biometric identification for large user population using fingerprint, face and IRIS recognition,” in Proceedings of the 34th Applied Imagery and Pattern Recognition Workshop (AIPR '05), pp. 88–95, Arlington, Va, USA, 2005.
  30. K. Nandakumar and A. K. Jain, “Multibiometric template security using fuzzy vault,” in IEEE 2nd International Conference on Biometrics: Theory, Applications and Systems, pp. 198–205, Washington, DC, USA, October 2008.
  31. A. Nagar, K. Nandakumar, and A. K. Jain, “Multibiometric cryptosystems based on feature-level fusion,” IEEE Transactions on Information Forensics and Security, vol. 7, no. 1, pp. 255–268, 2012.
  32. K. Balasubramanian and P. Babu, “Extracting minutiae from fingerprint images using image inversion and bi-histogram equalization,” in SPIT-IEEE Colloquium and International Conference on Biometrics, pp. 53–56, Washington, DC, USA, 2009.
  33. Y.-P. Huang, S.-W. Luo, and E.-Y. Chen, “An efficient iris recognition system,” in Proceedings of the International Conference on Machine Learning and Cybernetics, pp. 450–454, Beijing, China, November 2002.
  34. N. Tajbakhsh, K. Misaghian, and N. M. Bandari, “A region-based Iris feature extraction method based on 2D-wavelet transform,” in Bio-ID-MultiComm, 2009 joint COST 2101 and 2102 International Conference on Biometric ID Management and Multimodal Communication, vol. 5707 of Lecture Notes in Computer Science, pp. 301–307, 2009.
  35. D. Zhang, F. Song, Y. Xu, and Z. Liang, Advanced Pattern Recognition Technologies with Applications to Biometrics, Medical Information Science Reference, New York, NY, USA, 2008.
  36. C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, 1998.
  37. M. M. Ramon, X. Nan, and C. G. Christodoulou, “Beam forming using support vector machines,” IEEE Antennas and Wireless Propagation Letters, vol. 4, pp. 439–442, 2005.
  38. M. J. Fernández-Getino García, J. L. Rojo-Álvarez, F. Alonso-Atienza, and M. Martínez-Ramón, “Support vector machines for robust channel estimation in OFDM,” IEEE Signal Processing Letters, vol. 13, no. 7, pp. 397–400, 2006.
  39. B. E. Boser, I. M. Guyon, and V. N. Vapnik, “Training algorithm for optimal margin classifiers,” in Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory (COLT '92), pp. 144–152, Morgan Kaufmann, San Mateo, Calif, USA, July 1992.
  40. V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1999.
  41. C.-W. Hsu and C.-J. Lin, “A simple decomposition method for support vector machines,” Machine Learning, vol. 46, no. 1–3, pp. 291–314, 2002.
  42. J. Bhatnagar, A. Kumar, and N. Saggar, “A novel approach to improve biometric recognition using rank level fusion,” in IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–6, Hong Kong, China, June 2007.
  43. J. Manikandan and B. Venkataramani, “Evaluation of multiclass support vector machine classifiers using optimum threshold-based pruning technique,” IET Signal Processing, vol. 5, no. 5, pp. 506–513, 2011.
  44. R. Noori, M. A. Abdoli, A. Ameri Ghasrodashti, and M. Jalili Ghazizade, “Prediction of municipal solid waste generation with combination of support vector machine and principal component analysis: a case study of Mashhad,” Environmental Progress and Sustainable Energy, vol. 28, no. 2, pp. 249–258, 2009.