Special Issue: Computational Methods for Engineering Science
Research Article | Open Access
Jing Luo, Dan Song, Chunbo Xiu, Shuze Geng, Tingting Dong, "Fingerprint Classification Combining Curvelet Transform and Gray-Level Cooccurrence Matrix", Mathematical Problems in Engineering, vol. 2014, Article ID 592928, 15 pages, 2014. https://doi.org/10.1155/2014/592928
Fingerprint Classification Combining Curvelet Transform and Gray-Level Cooccurrence Matrix
Fingerprint classification is an important indexing scheme that reduces fingerprint matching time over a large database, enabling efficient large-scale identification. The Curvelet transform's ability to capture the directional edges of fingerprint images makes it well suited to fingerprint classification with high accuracy. This paper presents an efficient algorithm for fingerprint classification combining the Curvelet transform (CT) and the gray-level cooccurrence matrix (GLCM). Firstly, we use the fast discrete Curvelet transform via wrapping (FDCT_WARPING) to decompose the original image into Curvelet coefficients at five scales and construct a Curvelet filter from the relationship between Curvelet coefficients at adjacent scales to remove noise from the signal. Secondly, we compute the GLCMs of the Curvelet coefficients at the coarsest scale and calculate 16 texture features based on 4 GLCMs. Thirdly, we construct 49 direction features from the Curvelet coefficients at the other four scales. Finally, fingerprint classification is accomplished by K-nearest neighbor classifiers. Extensive experiments were performed on 4000 images in the NIST-4 database. The proposed algorithm achieves a classification accuracy of 94.6 percent for the five-class classification problem and 96.8 percent for the four-class classification problem with 1.8 percent rejection, respectively. The experimental results verify that the proposed algorithm has a higher recognition rate than wavelet-based techniques.
As a type of human biometric, the fingerprint has been widely used for personal recognition in forensic and civilian applications because of its uniqueness, immutability, and low cost. Automatic recognition of people based on fingerprints requires matching an input fingerprint against a large number of fingerprints in a database. However, since the database can be huge (e.g., the FBI database contains more than 70 million fingerprints), such a task can be very expensive in terms of computational burden and time. In order to reduce the search time and computational burden, fingerprints in the database are classified into several prespecified types or subclasses. When an input fingerprint is received, coarse-level matching first determines which subclass the input belongs to; then, at a finer level, it is compared to samples within that subset of the database for recognition. While such a scheme is obviously more efficient, the first step, that is, fingerprint classification, must be accurate and reliable, and it has hence attracted extensive research in recent years [1–13].
Fingerprints are classified based on their shapes, and in the literature it is common to have five classes, as shown in Figures 1(a)–1(e): whorl (W), right loop (R), left loop (L), arch (A), and tent arch (T). Although these five classes appear very different to a human observer, automatically classifying a fingerprint by machine is in fact a very challenging pattern recognition problem, owing to the small interclass variability, the large intraclass variability, and the difficulty posed by poor-quality fingerprints. Fingerprint classification is carried out by analysis and comparison of features. Over the past decade, various approaches have been proposed based on different types of features, such as singularities [1–3], the orientation field [4–6], and statistical and frequency features [7–10]. The methods based on singularities [1–3] accomplish fingerprint classification according to the number and relative positions of the core and delta points. The approaches using the orientation field [4–6] partition the fingerprint orientation field into "homogeneous" orientation regions, and the relational graphs of these regions are used to classify the fingerprint. Gabor filters can also be used to extract fingerprint features for classification: the input image is decomposed into four component images by four Gabor filters, the standard deviation of each component image in each sector generates the feature vector, and a K-nearest neighbor classifier performs the classification.
In 2001, Tico et al. proposed an approach using the wavelet transform to derive features for fingerprint recognition. In that work, a wavelet decomposition over 6 octaves of each fingerprint image was performed, and the normalized norm of each wavelet subimage was computed to extract a feature vector of length 18. The experimental database contained 168 fingerprint images collected from 21 fingers (8 images per finger). The algorithm achieves an accuracy of 100% when the wavelet basis Symmlet 6 or Symmlet 9 is employed. That work shows that wavelet features are suitable for matching complex patterns of oriented texture such as fingerprints. However, wavelets are characterized by isotropic scaling (e.g., the standard orthogonal wavelet transform contains wavelets in the primary vertical, primary horizontal, and primary diagonal directions only), and hence their ability to resolve directional features is limited. As a result, wavelets are not able to detect curved singularities effectively.
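A wavelet energy feature vector of this kind can be sketched as follows. This is a minimal illustration, not Tico et al.'s implementation: it assumes a plain Haar decomposition and per-subband normalization by subband size, and the function names are ours.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar transform: approximation plus three detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def wavelet_energy_features(img, levels=6):
    """Normalized norm of each detail subimage: 3 bands x 6 levels = 18 features."""
    feats, approx = [], img.astype(float)
    for _ in range(levels):
        approx, lh, hl, hh = haar_dwt2(approx)
        feats += [np.linalg.norm(b) / b.size for b in (lh, hl, hh)]
    return np.array(feats)
```

A constant image yields all-zero detail features, since only the approximation band carries energy.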
Inspired by the success of wavelets, a number of other multiresolution analysis tools have been proposed with the aim of better representing edges and other singularities along curves. These tools include the contourlet, the ridgelet, and the Curvelet. In recent years, researchers have used Curvelets for fingerprint image enhancement [11, 12] and for fingerprint recognition. Compared to the limited directional features of wavelets, Curvelets are more powerful, as they describe a signal by a group of matrices at multiple scales and multiple directions. Furthermore, as the scale increases, the directional resolution becomes much finer.
In 2008, Mandal and Wu proposed to use the Curvelet transform for fingerprint recognition and achieved an accuracy of 100%. However, the performance of their algorithm was only tested on a small database of 120 fingerprint images (containing only 15 individuals). Furthermore, in order to ensure accuracy, the technique requires manual detection of the core of the fingerprint image. Also, before extracting the Curvelet features, the technique needs a complex image enhancement process, which includes estimation of the local ridge orientation, estimation of the local ridge frequency across the fingerprint, filtering of the image, and binarization (conversion of a gray-scale fingerprint image to a binary image).
In this paper, we present a novel fingerprint classification algorithm. Firstly, we use the fast discrete Curvelet transform via wrapping (FDCT_WARPING) to decompose the original image into Curvelet coefficients at five scales and construct a Curvelet filter from the relationship between Curvelet coefficients at adjacent scales, smoothing the discontinuities of ridges and removing the noise in the fingerprint image. Secondly, we calculate four gray-level cooccurrence matrices (GLCMs) of the Curvelet coefficients at the coarsest scale and compute 16 texture features based on the 4 GLCMs. Furthermore, we construct 49 direction features from the Curvelet coefficients at the other four scales. Finally, these combined Curvelet-GLCM features act as the feature set for a K-nearest neighbor classifier.
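The GLCM half of the feature set can be illustrated with a small sketch. This is not the paper's exact implementation: it assumes the four standard offsets (0°, 90°, 45°, 135°) and four common Haralick statistics (contrast, energy, homogeneity, correlation), giving 4 × 4 = 16 features; the function names are ours.

```python
import numpy as np

def glcm(img, dx, dy, levels=8):
    """Gray-level cooccurrence matrix for offset (dx, dy), normalized to sum 1."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick4(p):
    """Contrast, energy, homogeneity, and correlation of one normalized GLCM."""
    n = p.shape[0]
    i, j = np.indices((n, n))
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum(p * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(p * (j - mu_j) ** 2))
    corr = np.sum(p * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j + 1e-12)
    return [contrast, energy, homogeneity, corr]

def texture_features(img):
    """16 features: 4 statistics for each of 4 GLCM offsets."""
    offsets = [(1, 0), (0, 1), (1, 1), (1, -1)]   # 0, 90, 45, 135 degrees
    return np.array([v for dx, dy in offsets
                     for v in haralick4(glcm(img, dx, dy))])
```

In the paper these statistics are computed on the coarsest-scale Curvelet coefficient matrix rather than on the raw image.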
In the following sections, we present the details of our fingerprint classification algorithm. Section 2 presents our noise filtration scheme and our feature extraction scheme. Section 3 presents experimental results achieved on the NIST-4 database. Finally, Section 4 draws conclusions and outlines open problems.
2. Fingerprint Classification
2.1. Fingerprint Alignments
Considering the translation and rotation between template images and probe images, this paper adopts a previously published algorithm to accomplish fingerprint image registration. The algorithm uses reference points in the central area to calculate the translation and rotation parameters, which makes it applicable even when there are no cores in the fingerprint images.
2.2. Fast Discrete Curvelet Transform (FDCT)
Curvelets were proposed by Candès and Donoho, constituting a family of frames designed to represent edges and other singularities along curves. Conceptually, the Curvelet transform is a multiscale pyramid with many orientations and positions at each length scale and needle-shaped elements at fine scales. This pyramid is nonstandard, however. Indeed, Curvelets have useful geometric features that set them apart from wavelets and the like. For instance, Curvelets exhibit highly anisotropic behavior, as they have both variable length and variable width. At fine scales, the anisotropy increases with decreasing scale, in keeping with a power law.
In 2006, Candès et al. proposed two fast discrete Curvelet transforms (FDCT). The first is based on unequally spaced fast Fourier transforms (USFFT), and the other is based on the wrapping of specially selected Fourier samples (FDCT_WARPING). Curvelets by wrapping are used in this work because this is the fastest Curvelet transform currently available.
After the Curvelet transform, several groups of Curvelet coefficients are generated at different scales and angles. The Curvelet coefficients at scale $j$ and angle $l$ are represented by a matrix $C_{j,l}$; the decomposition proceeds from the finest to the coarsest scale, and the angle index starts at the top-left corner and increases clockwise.
Suppose that $f(x, y)$, $0 \le x < M$, $0 \le y < N$, denotes the original image and $\hat{f}$ denotes its 2D discrete Fourier transform, where $M \times N$ is the size of the original image.
The implementation of FDCT_WARPING is as follows.
Step 1. The 2D FFT (fast Fourier transform) is applied to $f(x, y)$ to obtain the Fourier samples $\hat{f}[n_1, n_2]$.
Step 2. Resample $\hat{f}$ at each pair of scale $j$ and direction $l$ in the frequency domain, yielding the new sampling function $\hat{f}[n_1, n_2 - n_1 \tan \theta_l]$, where $\theta_l$ is the angle of direction $l$. The window function $\tilde{U}_j$ is positioned by two initial positions, and the length and width components of its support interval are governed by the corresponding scale parameters.
Step 3. Multiply the new sampling function by the window function $\tilde{U}_j$; the result is $\tilde{f}_{j,l}[n_1, n_2] = \hat{f}[n_1, n_2 - n_1 \tan \theta_l]\, \tilde{U}_j[n_1, n_2]$.
Step 4. Apply the inverse 2D FFT to each $\tilde{f}_{j,l}$, hence collecting the discrete coefficients $c^D(j, l, k)$.
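The four steps above can be sketched schematically in the frequency domain. The fragment below is a simplification, not the true transform: a crude indicator window over an annular wedge (a scale band crossed with an angular sector) stands in for the smooth Curvelet window and the shearing/wrapping machinery, so it only illustrates the FFT / window / inverse-FFT structure.

```python
import numpy as np

def band_coeffs(img, r_lo, r_hi, theta_lo, theta_hi):
    """Schematic curvelet-style step: window the centered 2D spectrum over an
    annular wedge (radii r_lo..r_hi, angles theta_lo..theta_hi in radians),
    then inverse-transform to get the 'coefficients' for that scale/angle."""
    n = img.shape[0]
    F = np.fft.fftshift(np.fft.fft2(img))          # centered spectrum
    k = np.arange(n) - n // 2
    kx, ky = np.meshgrid(k, k)
    r = np.hypot(kx, ky)                           # radial frequency
    theta = np.arctan2(ky, kx)                     # angle in (-pi, pi]
    win = ((r >= r_lo) & (r < r_hi) &
           (theta >= theta_lo) & (theta < theta_hi)).astype(float)
    return np.fft.ifft2(np.fft.ifftshift(F * win))
```

With a window covering the whole frequency plane, the round trip returns the original image, which is a quick sanity check on the FFT conventions.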
2.3. Fingerprint Image FDCT and Noise Filtration Technique
The Curvelet transform describes a signal by its power at each scale, angle, and position. Curvelets have variable length in addition to variable width. The Curvelet transform has improved directional capability and a better ability to represent edges and other singularities along curves compared to other multiscale transforms, for example, the wavelet transform. Images like fingerprints often have two nearby regions (ridges and valleys) that differ in pixel values. The variations in pixel values between two consecutive regions form such "edges," and this edge information is captured by the discrete Curvelet transform.
2.3.1. Acquisition of Curvelet Coefficients
In this paper, fingerprint images are taken from the NIST-4 database. Firstly, the size of all images is normalized to 256 × 256 pixels. The proposed algorithm is implemented in MATLAB 7.5 and executed under the Windows XP Professional operating system on a PC with an AMD dual-core E-350 CPU at 1.28 GHz. The original image is decomposed by FDCT_WARPING.
Suppose that $f(x, y)$, $0 \le x < M$, $0 \le y < N$, denotes the original fingerprint image and $\hat{f}$ denotes its 2D discrete Fourier transform, where $M \times N$ is the size of the original fingerprint image.
The number of scales $J$ can be calculated by $J = \lceil \log_2(\min(M, N)) - 3 \rceil$, where $\min(M, N)$ returns the minimum of $M$ and $N$ and $\lceil \cdot \rceil$ rounds the input up to the nearest integer.
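The scale count can be computed directly; the formula below assumes the CurveLab-style default of ceil(log2(min(M, N)) − 3), which yields the 5 scales used for 256 × 256 images in this paper.

```python
import math

def num_scales(m, n):
    """Number of curvelet scales for an m x n image: ceil(log2(min(m, n)) - 3)."""
    return math.ceil(math.log2(min(m, n)) - 3)
```

For a 256 × 256 image this gives 5 scales; doubling the image side adds one scale.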
The number of orientations $l_j$ at scale $j$, $2 \le j \le J - 1$, is calculated by $l_j = 16 \cdot 2^{\lceil (j-2)/2 \rceil}$, so that the orientation count starts at 16 at scale 2 and doubles every other scale.
Note that if FDCT_WARPING chooses Curvelets for the coefficients at the finest level, the number of orientations at the finest scale is also determined by (2). On the other hand, when wavelets are chosen for the coefficients at the finest level, there is only one angle at the finest level. In this paper, we adopt wavelets for the coefficients at the finest level; as a result, there is only one angle at the finest level.
For each scale $j$, $2 \le j \le 4$, all the Curvelet coefficients are divided into four quadrants, the quadrant label being denoted by $q$, $q = 1, 2, 3, 4$. In each quadrant, the Curvelet coefficients are further subdivided into angular panels. The number of angular panels in each quadrant of scale $j$, denoted by $n_j$, is $n_j = l_j / 4$.
In this paper, according to (4), the number of scales is 5. According to (5), the number of angles at scale 4 is 32, at scale 3 it is 32, and at scale 2 it is 16. At scale 4, the number of angular panels per quadrant is 8; at scale 3 it is 8; and at scale 2 it is 4. After decomposition, the original image is divided into three levels: coarse, detail, and fine. The low-frequency coefficients are assigned to the coarse level, the middle-frequency coefficients to the detail level, and the high-frequency coefficients to the fine level. In FDCT_WARPING, the decomposition proceeds from the finest to the coarsest scale, and the angle index starts at the top-left corner and increases clockwise.
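The per-scale orientation counts quoted above can be reproduced with a small helper. It assumes 16 orientations at the second-coarsest scale, a doubling every other scale toward finer scales, one "angle" for the coarsest lowpass, and wavelets (one angle) at the finest level, as adopted in this paper.

```python
def angles_per_scale(nscales=5, nangles_coarse=16, finest_wavelets=True):
    """Orientation count at each scale, listed from coarsest (scale 1)
    to finest (scale nscales)."""
    counts = [1]                                   # scale 1: lowpass, one block
    for j in range(2, nscales + 1):
        counts.append(nangles_coarse * 2 ** ((j - 1) // 2))  # doubles every 2nd scale
    if finest_wavelets:
        counts[-1] = 1                             # wavelets at the finest level
    return counts
```

The angular panels per quadrant are simply the orientation count divided by four, matching the values 4, 8, and 8 at scales 2, 3, and 4.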
The acquisition of Curvelet coefficients is as follows.
Step 1. The 2D FFT (fast Fourier transform) is applied to $f(x, y)$ to obtain the Fourier samples $\hat{f}[n_1, n_2]$.
Step 2. Acquire the Curvelet coefficients at scale 5, denoted by the matrix $C_5$.
In this paper, $\lfloor \cdot \rfloor$ rounds the input down to the nearest integer.
(1) Construct the right and left windows along the horizontal direction, denoted by the row vectors $w_r$ and $w_l$, respectively.
(2) Construct the right and left windows along the vertical direction, denoted by the row vectors $v_r$ and $v_l$, respectively; $w_r$, $w_l$, $v_r$, and $v_l$ are normalized.
(3) Construct the two sub-low-pass filters, denoted by the row vectors $\varphi_1$ and $\varphi_2$, respectively.
(4) Construct a low-pass filter at scale 5, denoted by the matrix $\Phi_5 = \varphi_1^T \varphi_2$, where $(\cdot)^T$ denotes the transpose of the input vector or matrix.
(5) Construct a high-pass filter at scale 5, denoted by the matrix $\Psi_5$, which has the same size as $\Phi_5$.
(6) $\hat{f}$ is filtered by $\Psi_5$, hence generating the filtered high-pass signal at scale 5, which has the same size as $\hat{f}$; the filter at scale 5 is supported on the corresponding frequency range.
(7) The inverse 2D FFT is applied to the high-pass signal, hence generating the discrete Curvelet coefficients at scale 5, $C_5$.
(8) $\hat{f}$ is filtered by $\Phi_5$, hence generating the filtered low-pass signal at scale 5.
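The low-pass/high-pass construction can be sketched as follows. This is a simplified stand-in, not the paper's exact windows: a raised-cosine 1D window replaces the window vectors, the low-pass filter is their outer (transpose) product, and the high-pass filter is taken as the complementary window so that the two are energy-complementary.

```python
import numpy as np

def ramp_window(n, lo, hi):
    """1D window: 1 inside |freq| < lo, raised-cosine rolloff to 0 at hi."""
    k = np.abs(np.arange(n) - n // 2)              # centered frequency index
    t = np.clip((k - lo) / float(hi - lo), 0.0, 1.0)
    return np.cos(0.5 * np.pi * t)                 # 1 at t=0, 0 at t=1

def scale5_filters(n):
    """Separable low-pass Phi (outer product of 1D windows) and the
    complementary high-pass Psi with Phi^2 + Psi^2 = 1 everywhere."""
    w = ramp_window(n, n // 8, n // 4)
    phi = np.outer(w, w)
    psi = np.sqrt(np.clip(1.0 - phi ** 2, 0.0, None))
    return phi, psi
```

The complementary choice of the high-pass window guarantees that splitting the spectrum into the two bands loses no energy, which is what makes the later reconstruction from all scales possible.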
Step 3. Acquire the Curvelet coefficients at scale 4 and angles 1 to 32, $C_{4,l}$, $1 \le l \le 32$.
Firstly, we acquire the Curvelet coefficients at scale 4 and angle 1, $C_{4,1}$.
The filter at scale 4 is supported on a correspondingly lower frequency range.
(1) Construct a low-pass filter at scale 4 and angle 1 in the same way as at scale 5, denoted by $\Phi_4$.
(2) Construct a high-pass filter at scale 4 and angle 1 in the same way as at scale 5, denoted by $\Psi_4$.
(3) The low-pass signal at scale 5 is filtered by $\Phi_4$, hence generating the filtered low-pass signal at scale 4.
(4) The low-pass signal at scale 5 is filtered by $\Psi_4$, hence generating the filtered high-pass signal at scale 4, which has the same size as its input.
(5) Determine the discrete locating window of the wedge wave at scale 4 and angle 1.
The Curvelet coefficients at scale 4 are divided into four quadrants, the quadrant label being denoted by $q$, $q = 1, 2, 3, 4$. Each quadrant has 8 angles: in the first quadrant ($q = 1$) the angle ranges from 1 to 8, in the second ($q = 2$) from 9 to 16, in the third ($q = 3$) from 17 to 24, and in the fourth ($q = 4$) from 25 to 32.
Suppose that $n_a$ denotes the number of angles in each quadrant at scale 4; in this paper, $n_a = 8$.
The left vector of the wedge wave is denoted by $b_l$, and the right vector by $b_r$.
The combination wedge-wave vector, denoted by $b$, combines $b_l$ and $b_r$.
The endpoint vector of the wedge wave is denoted by $e$.
The first and second midpoint vectors of the wedge wave are denoted by $m_1$ and $m_2$, and their combination by $m$.
The first wedge-wave endpoint along the vertical orientation, the length of the first wedge wave, and the width of the wedge wave are determined from these vectors, and the slope of the first wedge wave follows from the endpoint and the length.
The left line vector, the first row coordinate, the first column coordinate, and a condition column vector are then computed; together they define the discrete locating window of the wedge wave.
(6) The discrete locating window of the wedge wave is filtered and rotated, hence generating a wrapped window matrix.
The slope of the right wedge wave, the middle line matrix, the right coordinate matrix, and the corner coordinate matrix are computed analogously. The coordinate matrices are wrapped, the discrete locating window of the wedge wave is filtered, and the resulting matrix is rotated counterclockwise, yielding the locating window for the current angle.
(7) The inverse 2D FFT is applied to the windowed signal, hence generating the Curvelet coefficient at scale 4 and angle 1, $C_{4,1}$.
(8) Repeating the preceding substeps of Step 3 in the same way as for $C_{4,1}$, the Curvelet coefficients at scale 4 and angles 2 to 8 are generated. Note that for angles 2 to 32, the left line vector is defined analogously.
(9) The Curvelet coefficients at scale 4 in the other three quadrants are acquired in the same way as those in the first quadrant.
Finally, the Curvelet coefficients at scale 4, $C_{4,l}$, $1 \le l \le 32$, are generated after Step 3.
Step 4. Repeat Step 3, hence generating the Curvelet coefficients at scale 3, $C_{3,l}$, $1 \le l \le 32$.
Note that the discrete locating window of the wedge wave at scale 3 is calculated analogously, using the length and width of the locating window at scale 3 and the condition vector at scale 3.
Step 5. Repeat Step 3, hence generating the Curvelet coefficients at scale 2, $C_{2,l}$, $1 \le l \le 16$.
Note that the discrete locating window of the wedge wave at scale 2 is calculated analogously, using the locating-window size and the condition vector at scale 2.
Step 6. The inverse 2D FFT is applied to the low-pass signal at scale 2, generating the Curvelet coefficient at scale 1, $C_1$.
The detailed structure of the Curvelet coefficients obtained by FDCT_WARPING is shown in Table 1.
2.3.2. Fingerprint Image Noise Filtration Technique
Noise inevitably arises when acquiring fingerprint images. It may blur the image and introduce many discontinuities in the ridges (or valleys), thus hindering accurate feature extraction and recognition. It is therefore necessary and important to denoise fingerprint images.
The relationship between Curvelet coefficients at different scales is similar to that between wavelet coefficients; that is, there exists a strong correlation between them.
From Table 1, there are 16 and 32 orientations at scale 2 and scale 3, respectively, so each orientation at scale 2 corresponds to two adjacent Curvelet coefficient matrices generated at scale 3. The ridges in a fingerprint image correspond to Curvelet coefficients with large magnitude at scale 2. Each Curvelet coefficient matrix at scale 2 is decomposed into two Curvelet coefficient matrices at scale 3, at two adjacent orientations. The corresponding two Curvelet coefficient matrices at scale 3 also have large magnitude, while the magnitude of the Curvelet coefficients corresponding to noise dies out swiftly from scale 2 to scale 3. We therefore use the direct spatial correlation of Curvelet coefficients at scales 2 and 3 to accurately distinguish ridges from noise. For scales 4 and 5, we adopt the hard threshold method to filter the noise. Finally, we reconstruct the image from all the Curvelet coefficients and accomplish fingerprint image filtration.
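The two filtering rules described above can be sketched as follows. The interscale product threshold and the nearest-neighbor resizing of the scale-3 children are our simplifications, not the paper's exact formulas; the function names are ours.

```python
import numpy as np

def hard_thresh(c, t):
    """Hard thresholding, as used at scales 4 and 5: zero small coefficients."""
    return np.where(np.abs(c) >= t, c, 0.0)

def correlate_shrink(c2, c3a, c3b, thresh):
    """Keep a scale-2 coefficient only when the product of its magnitude with
    the magnitudes of its two scale-3 children is large: ridges persist
    across scales, while noise dies out quickly from scale 2 to scale 3."""
    def fit(a, shape):
        # nearest-neighbor resize of a child matrix onto the parent grid
        yi = np.arange(shape[0]) * a.shape[0] // shape[0]
        xi = np.arange(shape[1]) * a.shape[1] // shape[1]
        return np.abs(a)[np.ix_(yi, xi)]
    corr = np.abs(c2) * fit(c3a, c2.shape) * fit(c3b, c2.shape)
    return np.where(corr > thresh, c2, 0.0)
```

A coefficient that is large at scale 2 but has weak children at scale 3 produces a small product and is suppressed, which is exactly the behavior expected of noise.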
The proposed noise filtration technique has the following steps.
Step 1. Noise filtration of Curvelet coefficient matrices at scale 2 and scale 3, respectively.
This section details the major steps of the proposed noise filtration algorithm of Curvelet coefficient matrices generated at scales 2 and 3.
Assume that each matrix $C_{2,l}$ at scale 2 and orientation $l$, $1 \le l \le 16$, corresponds to two adjacent matrices $C_{3,2l-1}$ and $C_{3,2l}$ generated at scale 3. Let $M \times N$ be the size of $C_{2,l}$; the two scale-3 matrices have their own sizes. The matrices at scales 2 and 3 are filtered as follows.
(1) Decompose the Curvelet coefficient matrix $C_{2,l}$ into two matrices $A_1$ and $A_2$ of the same size: the elements of $A_1$ are extracted from the first block of rows and columns of $C_{2,l}$, and the elements of $A_2$ from the remaining block.
(2) Decompose each scale-3 Curvelet coefficient matrix into two matrices $B_1$ and $B_2$ of the same size in the same manner.
(3) Calculate the four multiplication coefficient matrices between the scale-2 matrices $A_1$, $A_2$ and the four scale-3 matrices, respectively.
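Steps (1)-(3) can be sketched as follows, under our simplifying assumptions about how the blocks are extracted (the split below takes the top-left and bottom-right quadrants) and with nearest-neighbor resizing of the scale-3 matrices onto the parent grid; the helper names are ours.

```python
import numpy as np

def split_parent(c2):
    """Split a scale-2 matrix into two equal blocks, one per scale-3 child
    (here: top-left and bottom-right quadrants, an assumed layout)."""
    h, w = c2.shape
    return c2[: h // 2, : w // 2], c2[h // 2 :, w // 2 :]

def product_matrices(parent, children):
    """Elementwise |parent| * |child| products, the correlation measure
    between adjacent scales; each child is nearest-neighbor resized
    onto the parent's grid first."""
    out = []
    for c in children:
        ys = np.linspace(0, c.shape[0] - 1, parent.shape[0]).astype(int)
        xs = np.linspace(0, c.shape[1] - 1, parent.shape[1]).astype(int)
        out.append(np.abs(parent) * np.abs(c[np.ix_(ys, xs)]))
    return out
```

Applying `product_matrices` to each half of the parent against both scale-3 halves yields the four multiplication coefficient matrices of step (3).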