Abstract

We present a novel watermarking algorithm to support the capacity demanded by multimodal biometric templates. The proposed technique embeds the watermark in the low frequency AC coefficients of selected 8 × 8 DCT blocks. Block selection preserves perceptual transparency by exploiting the masking effects of the human visual system (HVS). Embedding is done by modulating the coefficient magnitude as a function of its estimated value, which is computed from the weighted DC coefficients of the eight neighboring DCT blocks. The weights of the DC coefficients are calculated from local intrinsic image properties. In our experiments, iris and fingerprint templates are watermarked into standard test images. The robustness of the proposed algorithm is compared with a few state-of-the-art methods when the watermarked image is subjected to common channel attacks.

1. Introduction

With current advances in information communication and world-wide-web connectivity, the security and privacy issues around authentication have increased many fold. Applications such as electronic banking, e-commerce, m-commerce, ATMs, smart cards, and so forth demand careful attention to data security, both while data is stored in a database/token and while it is transmitted over the network. This makes the implementation of automatic, robust, and secure person identification a hot research topic. Biometric recognition offers a reliable solution for user authentication in identity management systems. One reason for the popularity of biometric systems is their ability to differentiate between an authorized person and a forger who might illegally attempt to access the privileges of the authorized person [1].

System accuracy depends on how efficiently it accepts genuine users and rejects impostors. Acceptance or denial of a user is decided by matching the live sample against the template database. However, a single physical characteristic or behavioral trait of an individual sometimes fails to suffice for user identification/verification. For this reason, systems that integrate two or more different biometrics are currently attracting attention and are being designed to interoperate. Such systems can provide acceptable performance, increasing the reliability of decisions as well as robustness against fraudulent techniques even when used by more than a billion users. Further, they also help to reduce the failure to enroll rate (FER) and the failure to capture rate (FCR) [2].

In [3] the authors point out that a biometrics based verification system works properly only if the verifier can guarantee that the biometric data came from the genuine person at the time of enrollment and is protected from various attacks while transmitted from client to server (between the database center and the matcher). Although a biometric system can sustain security, it is also susceptible to various types of threats [4, 5]. In [6] the author presents a generic biometric system with eight possible points of attack. These threats include a fake biometric (a fake finger, a face mask, etc.), replay of an old recorded signal (an old copy of a fingerprint, a recorded audio signal of a speaker, etc.), a feature extractor forced to produce feature values chosen by the attacker rather than the actual ones, a synthetic feature set, an artificially produced match score, and a template manipulated over a non-secure communication channel between the stored template and the matcher.

One approach to address the problems of a non-secure communication channel and template manipulation is to embed the biometric features as an invisible structure in an innocuous cover image. This technique, known as watermarking, prevents an eavesdropper from accessing sensitive template information and reduces the risk of manipulation.

A number of watermarking techniques have been proposed to secure information in an image. These can be broadly classified into spatial domain techniques and transform domain techniques. Recent watermarking techniques have been used in conjunction with biometrics [7–17] to enhance biometric security. Ratha et al. [12] proposed a blind data hiding method applicable to fingerprint images compressed with the WSQ (Wavelet Scalar Quantization) standard. The watermark message is assumed to be very small compared to the fingerprint image. Quantizer integer indices are randomly selected and each watermark bit replaces the LSB of the selected coefficient. At the decoder, the LSBs of these coefficients are collected in the same random order to reconstruct the watermark. Jain et al. [13] used facial information as a watermark to authenticate the fingerprint image. A bit stream of eigenface coefficients is embedded into selected fingerprint image pixels using a randomly generated secret key. The embedding is performed in the spatial domain and does not require the original image for extracting the watermark. Noore et al. [14] proposed a multiple watermarking algorithm that embeds face and text information in the texture regions of a fingerprint image using the Discrete Wavelet Transform (DWT); their approach is resilient to common attacks such as compression, filtering, and noise. Komninos and Dimitriou [15] combined lattice and block-wise image watermarking with a cryptographic technique to embed fingerprint templates into facial images while maintaining image quality. Al-Assam et al. [16] proposed a lightweight approach for securing biometric templates, based on a simple, efficient, and stable procedure for generating random projections that meets the revocability property. Nagar et al. [17] analyzed bio-hashing and cancelable fingerprint template transformation techniques using six metrics that facilitate security evaluation, and showed that such transformations remain vulnerable to linkage attacks.

Concerns about biometric template security are growing with the widespread deployment of biometric systems in both commercial and government applications. Keeping these security and secrecy issues in mind, in this paper we present a novel biometric watermarking algorithm to support the capacity demanded by multimodal templates. Section 2 briefly describes the biometric feature extraction and matching algorithms. Sections 3 and 4 explain the proposed watermarking technique and fusion model, respectively. The results obtained are illustrated in Section 5. We verify the matching ability of the different biometrics with and without watermarking and study the resilience to various attacks during transmission and processing of the host signal.

2. Biometric Feature Extraction and Matching Approach

Fingerprints and iris are selected as biometric traits because they are easily acquired, socially accepted, and more or less invariant to individual factors such as culture, sex, education level, orientation, and so forth. This section briefly explains fingerprint minutiae (feature) extraction, iris feature extraction, and the matching techniques.

2.1. Fingerprint Feature Extraction and Matching

Before minutiae extraction, the sensed print undergoes a few necessary steps. In this work the raw fingerprint image is routed through the following stages: (a) pre-processing: the fingerprint area is extracted and the boundary removed; a morphological opening operation removes peaks introduced by background noise, and a closing operation eliminates small cavities generated by improper fingerprint pressure; (b) thinning: removes erroneous pixels that would otherwise create spurious bridges and spurs, change the type of minutiae points, and cause true bifurcations to be missed; (c) false minutiae removal: removes false ridge breaks and ridge cross-connections, generated by an insufficient amount of ink and by over-inking, respectively.
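
The pre-processing chain above can be sketched as follows. This is a minimal illustration using scikit-image morphology, not the authors' implementation, and it assumes a binarised fingerprint image with ridge pixels set to 1.

```python
import numpy as np
from skimage.morphology import binary_opening, binary_closing, skeletonize

def preprocess_fingerprint(fp_binary: np.ndarray) -> np.ndarray:
    """Return a one-pixel-wide ridge skeleton ready for minutiae extraction."""
    footprint = np.ones((3, 3), dtype=bool)
    fp = fp_binary.astype(bool)
    fp = binary_opening(fp, footprint)    # remove peaks introduced by background noise
    fp = binary_closing(fp, footprint)    # close small cavities caused by uneven pressure
    return skeletonize(fp)                # thinning to one-pixel-wide ridges
```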

After extracting the minutia points, a rotation invariant feature vector is generated for each minutia point. The feature vector is defined on a surface geometry consisting of radial grids with origin at the minutia point and a fixed grid separation angle, as shown in Figure 1. Each grid is oriented with respect to the orientation of its minutia, and grid nodes (points on the grid) are marked along each grid at a fixed interval, starting with the minutia point as the origin. A larger number of grids and a smaller node interval make the feature vector larger; this gives better accuracy at the cost of increased computational complexity. For every node we take the orientation of the ridge that passes through it and compute its orientation relative to the minutia orientation; this relative orientation is invariant to rotation and translation of the fingerprint. If a node falls in a furrow, the relative orientation is assigned the value 0. The final feature vector of a minutia, describing its local structural characteristic, is the collection of these relative orientations, whose length equals the number of grids times the number of nodes per grid.

Considering three grids and five nodes per grid, the feature vector has 3 × 5 = 15 elements for each minutia point. These feature vectors are converted into a binary stream. Each relative orientation is represented with four bits, one for the sign and three for the orientation. Individual minutiae data sets contained between 20 and 30 minutiae points, with an average of 25 minutiae points. Thus the binary stream for a single fingerprint template is on average 25 × 15 × 4 = 1500 bits.
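
A minimal sketch of this descriptor is given below. The helper `orient_field`, the node interval, the separation angle, and the 3-bit magnitude scaling are hypothetical placeholders chosen for illustration; only the overall structure (relative orientations sampled on radial grid nodes, 4-bit sign/magnitude coding) follows the text.

```python
import numpy as np

def minutia_feature(x, y, theta, orient_field, n_grids=3, n_nodes=5,
                    sep_angle=np.pi / 6, d=10):
    """Relative ridge orientations sampled on radial grids around one minutia."""
    feats = []
    for g in range(n_grids):
        ang = theta + g * sep_angle                 # grid oriented w.r.t. the minutia
        for k in range(1, n_nodes + 1):
            ridge = orient_field(x + k * d * np.cos(ang), y + k * d * np.sin(ang))
            feats.append(0.0 if ridge is None else ridge - theta)  # rotation invariant
    return np.array(feats)                          # 3 x 5 = 15 relative orientations

def to_bits(feats):
    """4 bits per orientation: one sign bit plus a coarse 3-bit magnitude."""
    bits = []
    for f in feats:
        bits.append(1 if f < 0 else 0)
        q = min(int(abs(f) / (np.pi / 2) * 7), 7)   # assumed quantiser scaling
        bits.extend((q >> i) & 1 for i in (2, 1, 0))
    return np.array(bits, dtype=np.uint8)           # 60 bits per minutia
```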

A distortion-tolerant matching algorithm [18] is used here; it defines a feature vector for each fingerprint minutia based on the global orientation field. These features are used to identify corresponding minutiae between two fingerprint impressions by computing the similarity between feature vectors, which gives high verification accuracy. Given the structure feature vectors of a minutia from the input fingerprint and a minutia from the retrieved fingerprint features, a similarity level is defined as a function of the Euclidean distance between the two vectors: a pair is considered matched only when this distance falls below a predefined threshold. The choice of this threshold is a trade-off between the False Acceptance Rate (FAR) and the False Rejection Rate (FRR): a high threshold increases FAR, and the opposite holds for FRR. The similarity level thus describes the matching assurance level of a structure pair.
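
The exact similarity function of [18] is not reproduced here; the sketch below assumes a simple form that equals 1 at zero distance, decays linearly with the Euclidean distance, and drops to 0 once the distance exceeds the threshold T, matching the qualitative behaviour described above.

```python
import numpy as np

def similarity(f_input, f_retrieved, T=0.5):
    """Matching assurance level for one minutia pair (assumed linear decay)."""
    d = np.linalg.norm(np.asarray(f_input) - np.asarray(f_retrieved))
    return max(0.0, 1.0 - d / T)      # 0 when d >= T, i.e. no match
```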

2.2. Iris Feature Extraction and Matching

The general iris recognition system consists of four important steps: iris segmentation, which extracts the iris portion from the localized eye image; iris normalization, which converts the iris portion into a rectangular strip of fixed dimensions to compensate for deformation of the pupil due to changing environmental conditions; iris feature extraction, which extracts the core iris features from the iris texture patterns and generates a bitwise biometric template; and iris template matching, which compares the stored template with the query template and decides the authentication of a person based on a predefined threshold [19]. Among these steps, iris segmentation plays a very important role in the whole system, as it has to deal with occlusions from eyelids and eyelashes as well as specular highlights. If the iris portion is not properly segmented, recognition rates may suffer.

Iris segmentation is done using a pupil circle region growing technique which uses a binary integrated edge intensity curve to avoid eyelids and eyelashes. After locating the iris inner and outer boundaries, which may contain eyelids and eyelashes, we grow the circle of the pupil gradually and generate its edge image using a Sobel horizontal edge detector. As eyelids are horizontally aligned, the horizontally biased Sobel operator gives prominent horizontal eyelid edges. This approach is specifically used to detect the upper and lower eyelid regions and to restrict the Region of Interest (ROI). As long as the computed horizontal edge intensity curve stays below a threshold value, the eyelid portion has not yet been reached, as shown in Figure 2, and the radius continues to grow. When the horizontal edge intensity curve exceeds the threshold value, either the upper or the lower eyelid region has started appearing in the ROI, and the growth of the pupil circle is stopped. Thus the pupil circle is grown gradually to obtain a new outer iris boundary such that the area between the pupil boundary and the new outer boundary contains neither eyelids nor eyelashes.
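
A sketch of the circle growing step is shown below, assuming a grayscale eye image and an already detected pupil centre and radius; the ring construction, step size, and edge threshold are illustrative choices, not the authors' parameters.

```python
import cv2
import numpy as np

def grow_iris_boundary(eye, cx, cy, r_pupil, r_max, edge_threshold, step=2):
    """Grow the outer boundary until horizontally biased edges (eyelids) appear."""
    sobel_h = np.abs(cv2.Sobel(eye, cv2.CV_64F, 0, 1, ksize=3))   # horizontal edges
    for r in range(r_pupil + step, r_max, step):
        ring = np.zeros(eye.shape[:2], np.uint8)
        cv2.circle(ring, (cx, cy), r, 255, step)                  # thin circular ring mask
        edge_intensity = sobel_h[ring > 0].mean()                 # integrated edge curve
        if edge_intensity > edge_threshold:                       # eyelid region reached
            return r - step                                       # last eyelid-free radius
    return r_max
```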

The partial iris region between the iris inner boundary and the restricted outer boundary is converted into a rectangular strip of fixed dimensions 60 × 450 using Daugman's Rubber Sheet model [20]. Core features of the rectangular strip are extracted as suggested in [21]. The resulting iris core feature is 348 bits long and is used as the watermark. Matching between the input iris feature and the retrieved iris feature is done with the standard Hamming Distance (HD).
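
Iris matching then reduces to a normalised Hamming distance between the 348-bit codes; the decision threshold below is only an assumed placeholder.

```python
import numpy as np

def iris_match(code_a: np.ndarray, code_b: np.ndarray, threshold: float = 0.35) -> bool:
    """Normalised Hamming distance between two binary iris codes."""
    hd = np.count_nonzero(code_a != code_b) / code_a.size
    return hd < threshold
```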

3. Proposed Watermarking Approach

We propose a Discrete Cosine Transform (DCT) based blind watermarking technique. In the proposed approach the original image X, of arbitrary size, is divided into non-overlapping 8 × 8 blocks. Let x(i, j), 0 ≤ i, j ≤ 7, denote the pixel values of a block. Each block is transformed into a two dimensional DCT block and categorized as a smooth block, texture block, or edge block by measuring the local block variance and the local block projection of gradient. The key issues in watermarking are capacity, robustness, and invisibility. These requirements conflict with one another and cannot be optimized simultaneously. For situations demanding a large number of embedded bits, a trade-off between invisibility and robustness is necessary, so a reasonable compromise is always inevitable [22].
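
The block partitioning and 8 × 8 DCT step can be sketched as follows (orthonormal DCT-II via SciPy); handling of image sizes that are not multiples of 8 is simplified here by ignoring the trailing rows and columns.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct(image: np.ndarray, n: int = 8) -> dict:
    """Map the (row, col) origin of each n x n block to its 2-D DCT coefficients."""
    h, w = image.shape[0] - image.shape[0] % n, image.shape[1] - image.shape[1] % n
    return {(i, j): dctn(image[i:i + n, j:j + n].astype(float), norm='ortho')
            for i in range(0, h, n) for j in range(0, w, n)}

def block_idct(blocks: dict, shape: tuple, n: int = 8) -> np.ndarray:
    """Rebuild the spatial image from the per-block DCT coefficients."""
    out = np.zeros(shape)
    for (i, j), b in blocks.items():
        out[i:i + n, j:j + n] = idctn(b, norm='ortho')
    return out
```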

For biometric watermarking, robustness is very important because biometric information (the fingerprint feature vector and the iris feature) is embedded, and a change of even one bit can decrease authenticity. Most of the signal energy of a DCT block is concentrated in the DC component, and the remaining energy spreads diminishingly across the AC components in zigzag scan order, as shown in Figure 3. In that figure, the black-shaded block represents the DC component of the 8 × 8 DCT block, while the gray- and white-shaded blocks indicate the low frequency and high frequency AC components, respectively.

Hiding a watermark bit in the DC coefficient gives more robustness, but visibility of the watermark then becomes a major issue; the opposite holds for the high frequency AC coefficients. As a trade-off, the proposed technique embeds the watermark in the low frequency AC coefficients of the selected 8 × 8 DCT blocks. A watermark bit is embedded by modulating the low frequency AC coefficients of an 8 × 8 DCT block based on their estimated values. The estimated value of an AC coefficient is computed using the DC coefficients of the eight neighboring DCT blocks, as shown in Figure 4, in which DC1, ..., DC9 denote the DC coefficients of the 3 × 3 neighborhood of 8 × 8 blocks.

Considering such 3 × 3 overlapping neighborhoods of DCT blocks, the low frequency AC coefficients of the center DCT block are estimated using (4), in which each coefficient is expressed as a weighted combination of selected neighboring DC values. The notions behind the selection of the DC coefficients used to estimate a particular coefficient in (4) are as follows.
(1) Horizontal variations in each 8 × 8 DCT block are characterized by the AC components AC1 and AC5; hence the DC values of the horizontal neighborhood blocks (DC4, DC5, and DC6) enter the objective function for estimating AC1 and AC5.
(2) Vertical variations in each 8 × 8 DCT block are characterized by the AC components AC2 and AC3; hence the DC values of the vertical neighborhood blocks (DC2, DC5, and DC8) are used for estimating AC2 and AC3.
(3) AC4 represents diagonal variations; hence DC1, DC3, DC7, and DC9 are used for estimating AC4.
A Linear Programming based optimization technique [23] is used to calculate the optimal weights based on image content. In this method, known coefficients of benchmark images are used. All weights calculated for a particular coefficient are stored in a matrix, and a histogram of the matrix elements is computed; this histogram is a discrete function giving, for each weight value, the number of weights in the matrix having that value. From this set, the weight with the maximum frequency of occurrence is selected and multiplied with the corresponding DC values to estimate the AC coefficient. Edge blocks are considered neither for estimation nor for watermark embedding, because doing so leads to artifacts in the watermarked image. Figure 5(a) shows the artifacts that appear when edge blocks are used along with smooth blocks for embedding the watermark.
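
The estimation step might be sketched as below. The exact objective functions of (4) and the LP-derived weights of Table 1 are not reproduced; the linear weighted combination of the selected neighbouring DC values is an assumption that only mirrors notions (1)-(3), and the weight vectors W are placeholders to be filled from Table 1.

```python
import numpy as np

def estimate_low_freq_ac(dc: np.ndarray, W: dict) -> dict:
    """dc: 3 x 3 array of DC values in raster order (dc[1, 1] is the centre DC5).
    W: one weight vector per estimated coefficient, e.g. taken from Table 1."""
    horiz = np.array([dc[1, 0], dc[1, 1], dc[1, 2]])             # DC4, DC5, DC6
    vert = np.array([dc[0, 1], dc[1, 1], dc[2, 1]])              # DC2, DC5, DC8
    diag = np.array([dc[0, 0], dc[0, 2], dc[2, 0], dc[2, 2]])    # DC1, DC3, DC7, DC9
    return {'AC1': W['AC1'] @ horiz,   # horizontal variation
            'AC5': W['AC5'] @ horiz,
            'AC2': W['AC2'] @ vert,    # vertical variation
            'AC3': W['AC3'] @ vert,
            'AC4': W['AC4'] @ diag}    # diagonal variation
```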

The proposed method uses local block features, namely variance and the projection of the gradient, to identify edge blocks and exclude them from bit embedding. The variance is very sensitive to intensity fluctuations, so it is used as the decisive parameter to find smooth blocks, but it cannot discriminate between texture and edge blocks. However, the maxima of the first-order difference of the projections of the gradient image can differentiate edge blocks from texture blocks: a uniform distribution of edges in a texture block keeps this difference small, whereas randomly placed edges produce significant values in the vertical and/or horizontal projection differences. If the local variance is below a predefined threshold, the block is marked as a smooth block. If the local maximum of the differential projection is significantly larger than that of the global image, the block is marked as an edge block.
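
A possible implementation of this classification rule is sketched below; the variance threshold and the factor comparing the local peak with the global one are assumed values, not those used in the paper.

```python
import numpy as np

def classify_block(block: np.ndarray, global_peak: float,
                   var_threshold: float = 50.0, k: float = 3.0) -> str:
    """Label an 8 x 8 block as 'smooth', 'texture', or 'edge'."""
    if block.var() < var_threshold:
        return 'smooth'
    gy, gx = np.gradient(block.astype(float))
    grad = np.hypot(gx, gy)
    diff_h = np.abs(np.diff(grad.sum(axis=0)))   # 1st-order diff of horizontal projection
    diff_v = np.abs(np.diff(grad.sum(axis=1)))   # 1st-order diff of vertical projection
    local_peak = max(diff_h.max(), diff_v.max())
    return 'edge' if local_peak > k * global_peak else 'texture'
```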

In this approach, the low frequency coefficients of each smooth and texture block are selected for hiding the watermark (fingerprint and iris features). The iris feature and fingerprint feature are embedded sequentially by modifying the amplitudes of the transform domain coefficients of the selected DCT blocks. Modification is based on a comparison between the original coefficient value and its estimated value, as in (6), where a positive fraction controls the trade-off between robustness and perceptibility. The watermark vector in (6) is obtained by cascading the fingerprint feature bits and the iris feature bits.

Decoding a watermark bit requires the estimated value of the coefficient and the received coefficient value: if the received coefficient is greater than or equal to its estimate, the extracted bit is "1"; otherwise the extracted bit is "0".
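
The embedding and blind extraction rules might look as follows. The exact modulation of (6) is not reproduced; pushing the coefficient above or below its estimate by a margin proportional to an embedding fraction `alpha` is an assumption that is merely consistent with the threshold decoder described above.

```python
def embed_bit(ac: float, ac_est: float, bit: int, alpha: float = 0.1) -> float:
    """Push the coefficient above (bit 1) or below (bit 0) its estimate when it
    does not already satisfy the desired relation; alpha sets the margin."""
    margin = alpha * (abs(ac_est) + 1.0)          # +1 keeps a nonzero margin near zero
    if bit == 1:
        return ac if ac >= ac_est + margin else ac_est + margin
    return ac if ac <= ac_est - margin else ac_est - margin

def extract_bit(ac_received: float, ac_est: float) -> int:
    """Blind decoder: compare the received coefficient with its estimate."""
    return 1 if ac_received >= ac_est else 0
```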

4. Fusion Model

During the verification process, the live feature vector, consisting of the fingerprint feature vector and the iris feature vector, is compared with the extracted feature vectors. The extracted fingerprint features are in binary form, so they are first converted back into numeric form; the extracted iris feature is compared directly. If the similarity score of the fingerprint system and the matching distance of the iris system both satisfy their respective thresholds, the user is verified as genuine. The threshold values are determined during the system validation process. A user is registered in the database only if both systems have accepted the user.

For a security system, a low FAR is generally preferred, and fingerprint and iris based systems each provide a considerably low FAR. The fusion in our system is performed at the decision level to reduce the FAR further. A simple conjunction ("AND") rule can be used to combine the fingerprint system A and the iris system B, meaning that a False Accept can occur only if both A and B produce a False Accept. Let P_FA(A) and P_FA(B) be the probabilities of a False Accept for fingerprint and iris respectively, and let P_FR(A) and P_FR(B) be the corresponding probabilities of a False Reject. The combined probability of a False Accept is the product of the two individual probabilities:

P_FA = P_FA(A) × P_FA(B), (7)

while the combined probability of a False Reject is the complement of the probability that neither A nor B produces a False Reject, which is higher than for either system alone:

P_FR = 1 − (1 − P_FR(A)) × (1 − P_FR(B)). (8)

Equations (7) and (8) state that the joint probability of false acceptance decreases (satisfying the aim of a security system) while the joint probability of false rejection increases under the simple conjunction rule. To improve the FRR, the proposed fusion technique modifies the decision threshold of the weaker (fingerprint) system: the threshold of the fingerprint system is limited to a maximum value obtained by projecting 50% of the cross-over error rate (the point at which both error rates are equal) onto the FRR curve of the stronger (iris) system. This is achieved at the cost of some degradation in the combined FAR. Figure 6(b) is a magnified version of Figure 6(a); it shows the performance of the individual models as well as the combined model, with the marked points indicating the cross-over points of the fingerprint system, the iris system, the combination using the simple conjunction rule, and the combination using the modified approach, respectively. The simple conjunction rule improves FAR but at the same time increases FRR relative to the individual systems. The cross-over points of the fingerprint system, the iris system, the simple conjunction rule, and the modified approach are 6.2%, 3.2%, 5.5%, and 1.2%, respectively.
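
A short worked example of (7) and (8), with purely illustrative error rates, shows why the AND rule drives the combined FAR down while compounding the FRR:

```python
p_fa_fp, p_fa_ir = 0.02, 0.01   # illustrative false-accept rates (fingerprint, iris)
p_fr_fp, p_fr_ir = 0.06, 0.03   # illustrative false-reject rates (fingerprint, iris)

p_fa_and = p_fa_fp * p_fa_ir                      # (7): 0.0002, far below either system
p_fr_and = 1 - (1 - p_fr_fp) * (1 - p_fr_ir)      # (8): 0.0882, above either system
print(p_fa_and, p_fr_and)
```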

The FRR and FAR of the modified approach are better than those of the individual systems over the threshold range marked by the line segment in Figure 6(b).

5. Experimental Evaluation

An ideal template protection scheme should not degrade the recognition performance (FAR and FRR) of the biometric system. This section extends the experimental results of the DCT watermarking by computing the verification performance of fingerprint, iris, and multimodal biometrics under different attacks on the watermarked cover image. The experiment verifies the integrity and robustness of the proposed biometric watermarking algorithm. Since the proposed watermarking algorithm uses fingerprint and iris, we use a decision level biometric fusion algorithm, and the multimodal verification performance is computed using the proposed conjunction rule based fusion. To explore the performance of the proposed watermarking algorithm, a number of experiments are performed on different images of size 512 × 512, namely Texture, Cameraman, India logo, and Bank logo (shown in Figure 7(a)).

To calculate the optimal weights, the objective functions in (4) are solved using the above four images based on image content, and the most frequently occurring weights are selected for estimating the coefficients. Table 1 shows the weights derived from this experiment.

To evaluate the performance of the fingerprint and iris systems, the DB3 database of FVC2004 [24] and the CASIA database version 1 [25] are used, respectively. The DB3 database comprises 800 fingerprint images of size 300 × 480 pixels, captured at a resolution of 512 dpi from 100 fingers (eight impressions per finger). Individual minutiae data sets contained between 20 and 30 minutiae points, with an average of 25 minutiae points. The CASIA database version 1 contains 756 gray scale eye images of 108 users with a resolution of 320 × 280; each user has 7 images captured in two sessions, and each image is represented by 348 bits after feature extraction. We chose 100 users from the CASIA database and randomly paired them with the fingerprint database to check the improvement due to the fusion approach.

It is found that the standalone fingerprint and iris systems reach their equal error rates (EER) at their respective threshold values, as shown in Figure 6(b). To take advantage of the fusion model, the threshold of the fingerprint system is shifted within the range 0 to 0.32, indicated by the marked line segment in Figure 6(b). After combining both systems at the chosen fingerprint and iris thresholds, the EER obtained is 1.2% (marked in Figure 6(b)).

The main advantage of biometric watermarking is that the fingerprint and iris data of the individual need not be stored in separate databases. Digital watermarking allows all related data to be stored and retrieved at the same time. The retrieved fingerprint and iris features help in the verification of an individual.

It is well known that blind watermark extraction is more difficult than watermark recovery aided by a reference image; hence results should only be compared within the same group. We compare the robustness of the proposed biometric watermarking method with the methods suggested in [26, 27]. These algorithms were re-implemented, closely following the description in Section 3 of this paper (five bits are embedded in each 8 × 8 block).

Payload capacity is one of the comparative parameters. The payload capacity of the proposed approach for 512 × 512 images is shown in Table 2.

Imperceptibility of the watermark is measured by the Peak Signal to Noise Ratio (PSNR), PSNR = 10 log10(255^2 / MSE), where the mean squared error MSE is the average of the squared differences between the pixel values of the original host image and the watermarked image over the whole M × N image. The PSNR of the proposed method is observed to be higher than that of the methods in [26, 27], as stated in Table 3. In [26] the watermark is embedded in the low frequency coefficients of the center block of every 3 × 3 non-overlapping neighborhood without discarding edge blocks. In [27] the watermark is embedded into the DC coefficient, which determines the block average, so even a small variation in the DC coefficient affects the intensity of all pixels within the block and results in a low PSNR value.
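
A straightforward PSNR routine for 8-bit images, matching the definition above, is:

```python
import numpy as np

def psnr(host: np.ndarray, watermarked: np.ndarray) -> float:
    """Peak Signal to Noise Ratio in dB for 8-bit images."""
    mse = np.mean((host.astype(float) - watermarked.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```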

Electronic transmission of the cover image over a communication channel introduces degradations in the image data. For example, images are compressed when large image files are transmitted over a low bandwidth channel, a median filter may be used to smooth the image, and some noise is introduced during transmission. These effects on the watermarked image are studied using various image processing attacks such as JPEG compression, median filtering with a 3 × 3 mask, and the addition of Gaussian noise. To check robustness against image compression, the watermarked image is subjected to JPEG compression with different quality factors; the results are shown in Table 3. In the proposed algorithm, the watermark extraction Bit Error Rate (BER) is calculated as the percentage of embedded bits that are extracted incorrectly. A larger embedding fraction brings more robustness, and higher bit extraction accuracy guarantees recognition performance to a greater extent. Table 4 shows the extraction error rate when the watermarked image is attacked by median filtering with a mask size of 3 × 3.
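
The extraction BER is simply the fraction of embedded watermark bits recovered incorrectly, expressed as a percentage:

```python
import numpy as np

def ber(embedded: np.ndarray, extracted: np.ndarray) -> float:
    """Bit Error Rate (%) between embedded and extracted watermark bits."""
    return 100.0 * np.count_nonzero(embedded != extracted) / embedded.size
```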

Table 5 shows results for Gaussian filtering attacks. In all cases our method appears better than the methods in [26, 27]. The proposed technique is also robust against various signal processing operations such as enhancement (gamma = 0.7) and rescaling (512-256-512); results for both operations are shown in Table 6.

The Receiver Operating Characteristic (ROC) curves of the standalone systems are shown in Figures 8 and 9, in which the Equal Error Rate (EER) obtained for the fingerprint system is 6.2% and for the iris system 3.2%. It is clearly seen from the ROC curves that the FAR and FRR of the systems without watermarking are almost the same as those with watermarking.

Table 6 shows the EER of the fingerprint and iris systems for the different cases considered. The results illustrate that the proposed watermarking algorithm is robust against various template manipulation causes, both intentional and unintentional. Small variations in the retrieved template are tolerated by the strong matching algorithm.

6. Conclusion

Public systems such as e-Voting, e-Passport, and e-commerce endorsed with biometric authentication involve a massive number of users and demand high discrimination ability and secure transmission under varying channel conditions. To meet the first demand, we propose a multi-biometric system with decision level fusion using a conjunction rule. The overall FRR of the combined model is improved by conditionally limiting the threshold of the fingerprint system to a maximum value, obtained by projecting 50% of the cross-over error rate onto the FRR curve of the iris system; this is achieved at the cost of some degradation in the combined FAR. Furthermore, to achieve secure transmission, the biometric features are embedded inside a host image by a blind watermarking algorithm. We proposed a block discrimination and coefficient estimation approach based on spatial and spectral features. The payload capacity obtained is far better than that of state-of-the-art algorithms, and the scheme is robust against various signal processing and channel attacks.