#### Abstract

Fingerprint classification is an important indexing scheme that reduces fingerprint matching time over a large database, enabling efficient large-scale identification. The ability of the Curvelet transform to capture the directional edges of fingerprint images makes fingerprints well suited to accurate classification. This paper presents an efficient algorithm for fingerprint classification combining the Curvelet transform (CT) and the gray-level cooccurrence matrix (GLCM). Firstly, we use the fast discrete Curvelet transform via wrapping (FDCT_WARPING) to decompose the original image into Curvelet coefficients at five scales and construct a Curvelet filter from the relationship of Curvelet coefficients at adjacent scales to remove noise from the signal. Secondly, we compute the GLCMs of the Curvelet coefficients at the coarsest scale and calculate 16 texture features based on the 4 GLCMs. Thirdly, we construct 49 direction features from the Curvelet coefficients at the other four scales. Finally, fingerprint classification is accomplished by K-nearest neighbor classifiers. Extensive experiments were performed on the 4000 images of the NIST-4 database. The proposed algorithm achieves a classification accuracy of 94.6 percent for the five-class classification problem and 96.8 percent for the four-class classification problem with 1.8 percent rejection, respectively. The experimental results verify that the proposed algorithm has a higher recognition rate than wavelet-based techniques.

#### 1. Introduction

As a type of human biometrics, the fingerprint has been widely used for personal recognition in forensic and civilian applications because of its uniqueness, immutability, and low cost. Automatic recognition of people based on fingerprints requires matching an input fingerprint against a large number of fingerprints in a database. Since the database can be huge (e.g., the FBI database contains more than 70 million fingerprints), such a task can be very expensive in terms of computational burden and time. In order to reduce the search time and computational burden, fingerprints in the database are classified into several prespecified types or subclasses. When an input fingerprint is received, a coarse-level matching determines which subclass the input belongs to; then, at a finer level, it is compared to samples within that subset of the database for recognition. While such a scheme is obviously more efficient, the first step, that is, fingerprint classification, must be accurate and reliable and hence has attracted extensive research in recent years [1–13].

Fingerprints are classified based on their shapes, and in the literature it is common to have five classes, as shown in Figure 1, panels (a)–(e), row (A): whorl (W), right loop (R), left loop (L), arch (A), and tented arch (T). Although these five classes appear quite distinct to a human observer, automatically classifying a fingerprint by machine is in fact a very challenging pattern recognition problem, due to the small interclass variability, the large intraclass variability, and the difficulty of handling poor-quality fingerprints. Fingerprint classification is carried out by analysis and comparison of features. Over the past decade, various approaches have been proposed based on different types of features, such as singularities [1–3], orientation field [4–6], and statistical and frequency features [7–10]. The methods based on singularities [1–3] accomplish fingerprint classification according to the number and relative positions of the core and delta points. The approaches using the orientation field [4–6] partition the fingerprint orientation field into "homogeneous" orientation regions and use the relational graphs of these regions to classify fingerprints. Gabor filters [9] can also be used to extract fingerprint features for classification: the input image is decomposed into four component images by four Gabor filters, the standard deviation of each component image in each sector generates the feature vector, and a K-nearest neighbor classifier performs the classification.

**Figure 1:** The five fingerprint classes: (a) whorl (W), (b) right loop (R), (c) left loop (L), (d) arch (A), and (e) tented arch (T). In each panel, row (A) shows the original image and row (B) the image after the proposed noise filtration.

In 2001, Tico et al. [10] proposed using wavelet-transform features for fingerprint recognition. In [10], a wavelet decomposition over 6 octaves of each fingerprint image was performed, and the normalized norm of each wavelet subimage was computed to extract a feature vector of length 18. The experimental database contained 168 fingerprint images collected from 21 fingers (8 images per finger). The algorithm achieved an accuracy of 100% when the wavelet basis Symmlet 6 or Symmlet 9 was employed. The work in [10] shows that wavelet features are suitable for matching complex patterns of oriented texture such as fingerprints. However, wavelets are characterized by isotropic scaling (e.g., the standard orthogonal wavelet transform contains wavelets in the primary vertical, primary horizontal, and primary diagonal directions only) and hence their ability to resolve directional features is limited. Wavelets are therefore not able to detect curved singularities effectively.

Inspired by the success of wavelets, a number of other multiresolution analysis tools have been proposed with the aim of better representing edges and other singularities along curves. These tools include the contourlet, ridgelet, and Curvelet. In recent years, researchers have used Curvelets for fingerprint image enhancement [11, 12] and for fingerprint recognition [13]. Compared with the limited directional features of wavelets, Curvelets are more powerful, as they describe a signal by a group of matrices at multiple scales and multiple directions. Furthermore, as the scale becomes finer, the number of directions increases.

In 2008, Mandal and Wu [13] proposed using the Curvelet transform for fingerprint recognition and achieved an accuracy of 100%. However, the performance of their algorithm was tested only on a small database of 120 fingerprint images (containing only 15 individuals). Furthermore, in order to ensure accuracy, the technique requires manual detection of the core of the fingerprint image. Also, before extracting the Curvelet features, the technique needs a complex image enhancement process, which includes estimation of the local ridge orientation, estimation of the local ridge frequency across the fingerprint, filtering of the image, and binarization (conversion of a gray-scale fingerprint image to a binary image).

In this paper, we present a novel fingerprint classification algorithm. Firstly, we use the fast discrete Curvelet transform via wrapping (FDCT_WARPING) to decompose the original image into Curvelet coefficients at five scales and construct a Curvelet filter from the relationship of Curvelet coefficients at adjacent scales to smooth the discontinuities of ridges and remove the noise in the fingerprint image. Secondly, we calculate four gray-level cooccurrence matrices (GLCMs) of the Curvelet coefficients at the coarsest scale and calculate 16 texture features based on the 4 GLCMs. Furthermore, we construct 49 direction features from the Curvelet coefficients at the other four scales. Finally, these combined Curvelet-GLCM features serve as the feature set for a K-nearest neighbor classifier.

In the following sections, we will present the details of our fingerprint classification algorithm. Section 2 presents our noise filtration scheme and our feature extraction scheme. Section 3 presents some experimental results achieved on NIST-4 databases. Finally Section 4 draws the conclusions and outlines the open problems.

#### 2. Fingerprint Classification

##### 2.1. Fingerprint Alignments

Considering the translation and rotation between template images and probe images, this paper adopts the algorithm in [14] to accomplish fingerprint image registration. The algorithm uses reference points in the central area to calculate the translation and rotation parameters, which makes it more generally applicable when there are no cores in the fingerprint images.

##### 2.2. Fast Discrete Curvelet Transform (FDCT)

Curvelets were proposed by Candès and Donoho [15] and constitute a family of frames designed to represent edges and other singularities along curves. Conceptually, the Curvelet transform is a multiscale pyramid with many orientations and positions at each length scale and needle-shaped elements at fine scales. This pyramid is nonstandard, however. Indeed, Curvelets have useful geometric features that set them apart from wavelets and the like. For instance, Curvelets exhibit highly anisotropic behavior, as they have both variable *length* and variable *width*. At fine scales, the anisotropy increases with decreasing scale, in keeping with the parabolic scaling law (width ≈ length²).

In 2006, Candès et al. proposed two fast discrete Curvelet transforms (FDCT) [16]. The first is based on unequally spaced fast Fourier transforms (USFFT) [16], and the second on the wrapping of specially selected Fourier samples (FDCT_WARPING) [16]. The wrapping-based transform is used in this work because it is the fastest Curvelet transform currently available [16].

After the Curvelet transform, several groups of Curvelet coefficients are generated at different scales and angles. The Curvelet coefficients at scale j and angle l are represented by a matrix C(j, l); the scale index runs from finest to coarsest, and the angle starts at the top-left corner and increases clockwise.

Suppose that f(m, n), 0 ≤ m < M, 0 ≤ n < N, denotes the original image and F denotes its 2D discrete Fourier transform; M × N is the size of the original image.

The implementation of FDCT_WARPING is as follows.

*Step 1.* A 2D FFT (fast Fourier transform) is applied to the original image to obtain its Fourier samples F.

*Step 2.* Resample the Fourier samples at each pair of scale and direction in the frequency domain, yielding a new sampling function anchored at the two initial positions of the corresponding window function.

The resampling parameters are determined by the length and width components of the support interval of the window function.

*Step 3.* Multiply the new sampling function by the window function; the windowed product is then wrapped around the origin [16].

*Step 4.* Apply the inverse 2D FFT to each windowed, wrapped product, hence collecting the discrete Curvelet coefficients C(j, l).
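The four steps above are intricate in full generality. As a rough, self-contained illustration of the pipeline (2D FFT → per-scale, per-angle windowing → inverse FFT), the following Python sketch decomposes an image into frequency wedges. It uses hard binary radial/angular masks instead of the smooth Curvelet windows and omits the wrapping step, so it is a simplification of the idea, not the FDCT_WARPING algorithm of [16]:

```python
import numpy as np

def wedge_decompose(img, n_scales=3, n_angles=8):
    """Crude frequency-wedge decomposition: hard binary radial/angular
    masks in the FFT domain stand in for the smooth Curvelet windows."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    # Radius and angle of every (shifted) frequency sample.
    y, x = np.mgrid[-(M // 2):M - M // 2, -(N // 2):N - N // 2]
    r = np.hypot(x, y)
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)
    r_max = r.max()
    bands = {}
    for j in range(n_scales):                      # j = 0 is the coarsest scale
        lo = r_max / 2 ** (n_scales - j)
        hi = r_max / 2 ** (n_scales - j - 1)
        if j == 0:
            radial = r < hi                        # innermost disc (includes DC)
        elif j == n_scales - 1:
            radial = (r >= lo) & (r <= hi)         # include the outer rim
        else:
            radial = (r >= lo) & (r < hi)          # dyadic annulus
        for l in range(n_angles):
            angular = (theta >= 2 * np.pi * l / n_angles) & \
                      (theta < 2 * np.pi * (l + 1) / n_angles)
            # Inverse FFT of the masked spectrum gives one wedge component.
            bands[(j, l)] = np.fft.ifft2(np.fft.ifftshift(F * (radial & angular)))
    return bands
```

Because the binary masks partition the frequency plane, summing all wedge components recovers the original image, which makes the decomposition easy to sanity-check.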

##### 2.3. Fingerprint Image FDCT and Noise Filtration Technique

The Curvelet transform describes a signal by the power at each scale, angle, and position. Curvelets have variable length in addition to variable width. Compared with other multiscale transforms such as the wavelet transform, the Curvelet transform has improved directional capability and a better ability to represent edges and other singularities along curves. Images like fingerprints often have two nearby regions (ridges and valleys) that differ in pixel values. These variations in pixel values between two consecutive regions form "edges," and this edge information is captured well by the discrete Curvelet transform.

###### 2.3.1. Acquisition of Curvelet Coefficients

In this paper, fingerprint images are taken from the NIST-4 database. Firstly, all images are normalized to 256 × 256 pixels. The proposed algorithm is implemented in MATLAB 7.5 and executed under Windows XP Professional on a PC with an AMD dual-core E-350 CPU at 1.28 GHz. The original image is decomposed by FDCT_WARPING [16].

Suppose that f(m, n), 0 ≤ m < M, 0 ≤ n < N, denotes the original fingerprint image and F denotes its 2D discrete Fourier transform; M × N is the size of the original fingerprint image.

The number of scales J can be calculated by J = ⌈log2(min(M, N)) − 3⌉, where min(M, N) returns the minimum of M and N and ⌈·⌉ rounds the input up to the nearest integer.

The number of orientations at scale j, for j = 2, 3, …, J − 1, is calculated by N_angle(j) = 16 · 2^⌈(j−2)/2⌉, with a single orientation at the coarsest scale j = 1.

Note that if FDCT_WARPING chooses Curvelets for the coefficients at the finest level J, the number of orientations at scale J is determined by the same formula as at the intermediate scales. On the other hand, when choosing wavelets for the coefficients at the finest level J, there is only one angle at that level. In this paper, we adopt wavelets for the coefficients at the finest level J; as a result, there is only one angle at the finest level.

For each scale j, all the Curvelet coefficients are divided into four quadrants, labeled q = 1, 2, 3, 4. Within each quadrant, the Curvelet coefficients are further subdivided into angular panels. The number of angular panels in each quadrant at scale j is N_panel(j) = N_angle(j)/4.

In this paper, the number of scales is J = 5. The number of angles is 32 at scale 4, 32 at scale 3, and 16 at scale 2; correspondingly, the number of angular panels per quadrant is 8 at scale 4, 8 at scale 3, and 4 at scale 2. After decomposition, the original image is divided into three levels: coarse (the low-frequency coefficients), detail (the middle-frequency coefficients), and fine (the high-frequency coefficients). According to FDCT_WARPING [16], the scale runs from finest to coarsest, and the angle starts at the top-left corner and increases clockwise.
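The scale and angle counts above follow CurveLab-style defaults, and a few lines of Python reproduce them. The formulas below are our reading of the (garbled) equations in this section and should be checked against [16]:

```python
import math

def curvelet_layout(M, N, n_angles_coarse=16):
    """Number of scales and per-scale angle counts for an M x N image,
    following the CurveLab-style defaults assumed in the text."""
    J = math.ceil(math.log2(min(M, N)) - 3)        # number of scales
    angles = {1: 1}                                # coarsest scale: one block
    for j in range(2, J):                          # intermediate scales
        angles[j] = n_angles_coarse * 2 ** math.ceil((j - 2) / 2)
    angles[J] = 1                                  # wavelets at the finest scale
    return J, angles
```

For a 256 × 256 image this yields J = 5 with 16, 32, and 32 angles at scales 2, 3, and 4, matching the counts quoted in the text.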

The acquisition of Curvelet coefficients is as follows.

*Step 1.* A 2D FFT (fast Fourier transform) is applied to the original fingerprint image to obtain its Fourier samples F.

*Step 2.* Acquire the Curvelet coefficients at scale 5, denoted by matrix C(5, 1). In the following, ⌊·⌋ rounds the input down to the nearest integer. (1) Construct the right and left windows along the horizontal direction (two row vectors). (2) Construct the right and left windows along the vertical direction (two more row vectors); all four windows are then normalized. (3) From the normalized windows, construct the two sub-low-pass filters (row vectors). (4) Construct the low-pass filter at scale 5, a matrix formed from the two sub-low-pass filters via the transpose (outer-product) construction. (5) Construct the high-pass filter at scale 5, a matrix of the same size as the low-pass filter and complementary to it. (6) F is filtered by the high-pass filter, generating the filtered high-pass signal at scale 5, supported on the frequency range of the filter at scale 5. (7) The inverse 2D FFT (inverse fast Fourier transform) is applied to the high-pass signal, generating the discrete Curvelet coefficients at scale 5, C(5, 1). (8) F is filtered by the low-pass filter, generating the filtered low-pass signal at scale 5.
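The low-pass/high-pass split in items (4)–(6) and (8) can be sketched as follows. The raised-cosine window profile is our stand-in assumption, since the paper's exact window equations were lost; what the sketch preserves is the separable outer-product construction and the complementarity of the two filters:

```python
import numpy as np

def lowpass_highpass_split(F):
    """Split the 2D spectrum F (already fftshift-ed) into low- and
    high-pass parts with a separable raised-cosine window."""
    M, N = F.shape

    def ramp(n):
        # 1 in the centre, smooth raised-cosine roll-off to 0 at the edges.
        u = np.abs(np.linspace(-1.0, 1.0, n))
        w = np.clip((u - 0.25) / 0.5, 0.0, 1.0)    # transition band [0.25, 0.75]
        return np.cos(0.5 * np.pi * w)

    L = np.outer(ramp(M), ramp(N))                 # low-pass filter: outer product
                                                   # of two 1D windows, cf. item (4)
    H = np.sqrt(1.0 - L ** 2)                      # complementary high-pass, item (5)
    return F * L, F * H                            # items (6) and (8)
```

Since L² + H² = 1 everywhere, the two outputs partition the energy of the spectrum exactly, which is the property the complementary filter pair is meant to provide.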

*Step 3.* Acquire the Curvelet coefficients at scale 4 and angles 1 to 32, C(4, l), l = 1, …, 32.

Firstly, we acquire the Curvelet coefficients at scale 4 and angle 1, C(4, 1).

The filter at scale 4 covers the corresponding lower-frequency range of the spectrum. (1) Construct a low-pass filter at scale 4 and angle 1 in the same way as at scale 5. (2) Construct a high-pass filter at scale 4 and angle 1 in the same way as at scale 5. (3) The low-pass signal at scale 5 is filtered by the low-pass filter, generating the filtered low-pass signal at scale 4. (4) The low-pass signal at scale 5 is filtered by the high-pass filter, generating the filtered high-pass signal at scale 4, which has the same size as the low-pass signal. (5) Determine the discrete locating window of the wedge wave at scale 4 and angle 1.

The Curvelet coefficients at scale 4 are divided into four quadrants, labeled q = 1, 2, 3, 4, each containing 8 angles: in the first quadrant (q = 1), the angle ranges from 1 to 8; in the second (q = 2), from 9 to 16; in the third (q = 3), from 17 to 24; and in the fourth (q = 4), from 25 to 32.

Suppose that n_a denotes the number of angles per quadrant at scale 4; in this paper, n_a = 8. The discrete locating window of the wedge wave at scale 4 and angle 1 is specified by the following quantities:

- the left vector and the right vector of the wedge wave, and their combination;
- the endpoint vector of the wedge wave;
- the first and second midpoint vectors of the wedge wave, and their combination;
- the first wedge-wave endpoint along the vertical orientation;
- the length, width, and slope of the first wedge wave;
- the left line vector, and the first row and column coordinates;
- a condition column vector.

Together, these quantities determine the discrete locating window of the wedge wave. (6) The discrete locating window of the wedge wave is then filtered and rotated, generating the rotated window matrix.

The filtering and rotation proceed as follows. Auxiliary index matrices are first computed from the window length, width, and slope: the slope of the right wedge wave, the middle line matrix, the right coordinate matrix, and the corner coordinate matrix. The coordinate matrices are wrapped, yielding the wrapped index matrices; the discrete locating window of the wedge wave is filtered using these indices, and the resulting matrix is rotated counterclockwise, yielding the rotated window. (7) The inverse 2D FFT is applied to the rotated, windowed data, hence generating the Curvelet coefficient matrix at scale 4 and angle 1, C(4, 1). (8) Repeating the corresponding substeps of Step 3, in the same way as C(4, 1) is acquired, the Curvelet coefficients at scale 4 and angles 2 to 8 are generated.

Note that for angles 2 to 32, the left line vector is defined analogously. (9) The Curvelet coefficients at scale 4 in the other three quadrants are acquired in the same way as those in the first quadrant.

Finally, all the Curvelet coefficients at scale 4, C(4, l), l = 1, …, 32, are generated by Step 3.

*Step 4.* Repeat Step 3, hence generating the Curvelet coefficients at scale 3, C(3, l), l = 1, …, 32.

Note that the discrete locating window of the wedge wave at scale 3 is calculated in the same way, using the length and width of the discrete locating window and the condition vector at scale 3.

*Step 5.* Repeat Step 3, hence generating the Curvelet coefficients at scale 2, C(2, l), l = 1, …, 16.

Note that the discrete locating window of the wedge wave at scale 2 is calculated in the same way, with the corresponding parameters and condition vector at scale 2.

*Step 6.* The inverse 2D FFT is applied to the low-pass signal at scale 2, generating the Curvelet coefficients at scale 1, C(1, 1).

The detailed structure of the Curvelet coefficients obtained by FDCT_WARPING is shown in Table 1.

###### 2.3.2. Fingerprint Image Noise Filtration Technique

Noise inevitably arises when acquiring fingerprint images. The noise may cause vagueness and many discontinuities of the ridges (or valleys) in the image and thus affects accurate feature extraction and recognition. It is therefore necessary and important to denoise fingerprint images.

The relationship between Curvelet coefficients at different scales is similar to that of wavelet coefficients; that is, there exists a strong correlation between coefficients at adjacent scales.

From Table 1, there are 16 and 32 orientations at scale 2 and scale 3, respectively, so each orientation at scale 2 corresponds to two adjacent orientations at scale 3. The ridges in a fingerprint image correspond to Curvelet coefficients with large magnitude at scale 2. Each Curvelet coefficient matrix at scale 2 decomposes into two Curvelet coefficient matrices at scale 3 at the two adjacent orientations. For ridges, the corresponding two Curvelet coefficient matrices at scale 3 also have large magnitude, while the magnitude of Curvelet coefficients corresponding to noise dies out swiftly from scale 2 to scale 3. We therefore use the direct spatial correlation of Curvelet coefficients between scale 2 and scale 3 to accurately distinguish ridges from noise. For scales 4 and 5, we adopt hard thresholding to filter the noise. Finally, we reconstruct the image from all the filtered Curvelet coefficients using the technique of [17] and so accomplish fingerprint image filtration.

The proposed noise filtration technique has the following steps.

*Step 1.* Noise filtration of the Curvelet coefficient matrices at scale 2 and scale 3.

This section details the major steps of the proposed noise filtration algorithm of Curvelet coefficient matrices generated at scales 2 and 3.

Assume that each matrix C2(l) at scale 2 and orientation l, l = 1, …, 16, corresponds to two matrices C3(2l − 1) and C3(2l) generated at adjacent orientations at scale 3. Let M × N be the size of C2(l). The matrices at scales 2 and 3 are filtered as follows. (1) Decompose the Curvelet coefficient matrix C2(l) into two submatrices, extracted from complementary row and column ranges of C2(l). (2) Decompose each of the two scale-3 matrices in the same way, yielding four scale-3 submatrices. (3) Calculate the four multiplication (correlation) coefficient matrices between the submatrices of C2(l) and the four scale-3 submatrices. (4) Filter the scale-3 Curvelet coefficient matrices: where the complex modulus (magnitude) of the corresponding correlation coefficient reaches the threshold T, the Curvelet coefficients are assumed to correspond to ridges of the image; otherwise they are taken to correspond to noise and are assigned 0. (5) When filtering the scale-2 Curvelet coefficient matrix C2(l), if any of the four filtered scale-3 submatrices equals the zero matrix (the matrix with all elements zero), C2(l) is considered noise and is assigned the zero matrix as well. (6) Repeat (1) to (5) 16 times (once per orientation of the Curvelet coefficients at scale 2).

After step (6), all the Curvelet coefficients at scale 2 and scale 3 are filtered.
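The cross-scale correlation rule can be sketched as follows. Since the exact decomposition and threshold formulas were lost in extraction, this version simply resizes the two scale-3 children to the parent's size, multiplies magnitudes, and zeroes everything whose correlation product falls below a threshold T; the nearest-neighbour resizing and the max over the two children are our assumptions, not the paper's formulas:

```python
import numpy as np

def resize_nn(A, shape):
    """Nearest-neighbour resize so matrices of different sizes can be compared."""
    ri = np.arange(shape[0]) * A.shape[0] // shape[0]
    ci = np.arange(shape[1]) * A.shape[1] // shape[1]
    return A[np.ix_(ri, ci)]

def correlate_denoise(parent, child_a, child_b, T):
    """Keep a scale-2 coefficient (and its scale-3 children) only where the
    product of magnitudes across scales is large, i.e. where ridge energy
    persists from scale 2 to scale 3; noise decays and is zeroed out."""
    ca = resize_nn(child_a, parent.shape)
    cb = resize_nn(child_b, parent.shape)
    corr = np.abs(parent) * np.maximum(np.abs(ca), np.abs(cb))
    keep = corr >= T
    return (parent * keep,
            child_a * resize_nn(keep, child_a.shape),
            child_b * resize_nn(keep, child_b.shape))
```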

*Step 2.* Noise filtration of the Curvelet coefficient matrices at scale 4 and scale 5.

The Curvelet coefficient matrices generated at scale 4 are hard-thresholded: every coefficient whose complex modulus (magnitude) falls below the threshold T2 is assigned 0, and the rest are kept unchanged, where T2 is computed from the size of the corresponding matrix C(4, l).

The Curvelet coefficient matrices generated at scale 5 are filtered in the same way as those generated at scale 4, with the threshold computed accordingly for scale 5.
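Hard thresholding at scales 4 and 5 is simple to sketch. The threshold rule shown here, a multiple of the standard deviation of the coefficient magnitudes, is a placeholder for the paper's lost formula:

```python
import numpy as np

def hard_threshold(C, k=3.0):
    """Zero every coefficient whose magnitude is below k times the
    standard deviation of the magnitudes of matrix C (placeholder rule)."""
    T = k * np.abs(C).std()
    return np.where(np.abs(C) >= T, C, 0)
```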

*Step 3.* After the coefficients at scales 2, 3, 4, and 5 are filtered, we reconstruct the image from the filtered coefficients using the technique of [16] and so accomplish image noise filtration.

The threshold T in Step 1 can be obtained from the statistics of the difference of correlation coefficients in adjacent directions at the same scale; its final value was selected through extensive experiments. Figure 1 shows the noise filtration results for the five types of fingerprint image under the proposed algorithm.

From Figure 1, panels (a)–(e), row (B), we can see that many discontinuities of the ridges in the original images are smoothed after filtering and the ridge direction is well preserved, which lays a good foundation for accurate feature extraction and recognition. The Curvelet coefficients at different scales for the five types of filtered images are shown in Figure 2.

**Figure 2:** Curvelet coefficients at different scales of the five types of filtered fingerprint images: (a) whorl, (b) right loop, (c) left loop, (d) arch, and (e) tented arch.

As Figures 2(a) to 2(e) show, the Curvelet coefficient images exhibit strong orientations. The white parts of the images represent partial edges of the fingerprint ridges in different orientations; these are the significant Curvelet coefficients of the images. The low-frequency (coarse-scale) coefficients are stored at the center of the display. The Cartesian concentric coronae show the coefficients at different scales; the outer coronae correspond to higher frequencies. Each corona has four strips, corresponding to the four cardinal points, which are further subdivided into angular panels. Each panel represents the coefficients at a specified scale and along the orientation suggested by the position of the panel.

##### 2.4. Fingerprint Feature Extraction

Haralick et al. [18] first proposed the gray-level cooccurrence matrix (GLCM) for texture description in the 1970s. It remains popular today and is widely used in various texture classification tasks [19–23] because of its good statistical performance. The GLCM is a second-order statistical method that describes the spatial interrelationships of the grey tones in an image.

The GLCM contains elements that count the number of pixel pairs separated by a certain distance at a certain angular direction. Typically, the GLCM is calculated in a small window that scans the whole image, so a texture feature is associated with each pixel.

In our studies, the GLCM is computed based on two parameters: the distance d between the pixel pair and their angular relationship θ, which is quantized in four directions (0°, 45°, 90°, and 135°). For an image I, a defined square window W, and brightness levels i and j, the non-normalized GLCM is

P(i, j | d, θ) = #{((x, y), (x ± Δx, y ± Δy)) in W × W : I(x, y) = i, I(x ± Δx, y ± Δy) = j},

where # counts the number of qualifying pairs. The ± signs mean that each pixel pair is counted twice, once forward and once backward, in order to make the GLCM diagonally symmetric. The offsets (Δx, Δy) for each direction are given in Table 2.
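A minimal implementation of the symmetric non-normalized GLCM described above follows; the (Δx, Δy) offsets for d = 1 are written out explicitly and are our assumption, since Table 2 is not reproduced here:

```python
import numpy as np

# (row, col) displacements for d = 1 at 0, 45, 90, and 135 degrees
# (assumed values; Table 2 of the paper is not reproduced here).
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def glcm(img, levels=8, angle=0):
    """Symmetric non-normalized GLCM: each pixel pair is counted
    forward and backward, so P is diagonally symmetric."""
    dr, dc = OFFSETS[angle]
    P = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                i, j = img[r, c], img[r2, c2]
                P[i, j] += 1       # forward pair
                P[j, i] += 1       # backward pair -> symmetry
    return P
```

The input is assumed to be already quantized to the given number of grey levels, as Step 1 of the feature extraction requires.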

The procedures of feature extraction are as follows.

*Step 1.* Scale the grayscale values of the Curvelet coefficients at scale 1 into 8 levels, compute their GLCMs, and calculate 16 texture features based on the 4 GLCMs. With p(i, j) denoting the normalized GLCM, the features are as follows [18].

(1) Angular second moment (ASM): ASM = Σ_i Σ_j p(i, j)².

(2) Contrast (CON): CON = Σ_n n² Σ_{|i−j|=n} p(i, j), where n = |i − j|.

(3) Correlation (COR): COR = (Σ_i Σ_j i·j·p(i, j) − μ_x μ_y) / (σ_x σ_y), where μ_x, μ_y, σ_x, and σ_y are the means and standard deviations of the row and column marginal distributions of p(i, j).

(4) Entropy (ENT): ENT = −Σ_i Σ_j p(i, j) log p(i, j).

*Step 2.* Calculate the averaged norm of the Curvelet coefficients in each of 8 directions at the second scale, acquiring 8 texture features: for each direction, the norm of the matrix C(2, l) is divided by the number of its entries, where M_l × N_l is the size of C(2, l).

*Step 3.* Calculate the averaged norm of the Curvelet coefficients in 16 directions at the third scale and acquire 16 texture features, in the same way as in Step 2.

*Step 4.* Calculate the averaged norm of the Curvelet coefficients in 16 directions at the fourth scale and acquire 16 texture features, in the same way as in Step 2.

*Step 5.* Calculate the averaged norm of the Curvelet coefficients at the fifth scale and acquire 1 texture feature.

Note that in Steps 2, 3, and 4, we calculate the averaged norm of the Curvelet coefficients in the even-numbered directions only, to preserve classification accuracy while reducing recognition time. A feature vector containing 57 components can thus be extracted for each image.
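Given a GLCM, the four Step 1 features can be computed as below. The averaged norm of Steps 2–5 is implemented as the mean coefficient magnitude, which is our reading of "averaged norm" (the exact norm symbol was lost in extraction):

```python
import numpy as np

def haralick_features(P):
    """ASM, contrast, correlation, and entropy of a (symmetric) GLCM."""
    p = P / P.sum()                                  # normalize to probabilities
    i, j = np.indices(p.shape)
    asm = (p ** 2).sum()
    con = ((i - j) ** 2 * p).sum()                   # equals sum_n n^2 sum_{|i-j|=n} p
    mu_x, mu_y = (i * p).sum(), (j * p).sum()
    sd_x = np.sqrt(((i - mu_x) ** 2 * p).sum())
    sd_y = np.sqrt(((j - mu_y) ** 2 * p).sum())
    cor = ((i * j * p).sum() - mu_x * mu_y) / (sd_x * sd_y)
    nz = p > 0                                       # avoid log(0)
    ent = -(p[nz] * np.log2(p[nz])).sum()
    return asm, con, cor, ent

def averaged_norm(C):
    """Mean magnitude of a coefficient matrix (assumed 'averaged norm')."""
    return np.abs(C).mean()
```

Concatenating 16 GLCM features with the 41 per-direction averaged norms yields the 57-component feature vector described above.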

#### 3. Experiment Results

##### 3.1. Datasets

NIST special fingerprint database 4 (NIST-4) is one of the most important benchmarks for fingerprint classification, and most published results on fingerprint classification are based on it. For comparison with other approaches, we also evaluate our fingerprint classification algorithm on this database for the five-class fingerprint classification problem. Since fingerprint classes A (arch) and T (tented arch) have a substantial overlap, it is very difficult to separate these two classes; therefore, we also report our results for the four-class classification problem, where classes A and T are merged into one class. NIST-4 contains 4000 fingerprints of size 480 × 512 pixels, taken from 2000 fingers. Each finger has two impressions (f and s). The first fingerprint instances are numbered f0001 to f2000 and the second s0001 to s2000. All fingerprints in this database are used in our experiment. We form our training set with the first 2,000 fingerprints from 1,000 fingers (f0001 to f1000 and s0001 to s1000), and the test set contains the remaining 2,000 fingerprints (f1001 to f2000 and s1001 to s2000).

To eliminate large differences between the feature vectors, each feature vector is normalized according to v′_i = (v_i − min_i) / (max_i − min_i), where v_i represents the ith element of the vector v, min_i denotes the minimum of the ith element over all the row vectors, and max_i denotes the maximum of the ith element over all the row vectors.
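The normalization can be written column-wise over the matrix whose rows are the feature vectors:

```python
import numpy as np

def minmax_normalize(X):
    """Scale each feature (column) of X to [0, 1]:
    x' = (x - min) / (max - min), computed per column."""
    lo = X.min(axis=0)
    hi = X.max(axis=0)
    return (X - lo) / (hi - lo)
```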

##### 3.2. Experiment Results and Analysis

The performance of a fingerprint classification algorithm is often measured in terms of accuracy, computed as the ratio between the number of correctly classified fingerprints and the total number of fingerprints in the test set. Each image is labeled with one or more of the five classes (W, R, L, A, and T). To simplify the training procedure, we use only the first label of a fingerprint to train our system. For testing, however, we use all the labels of a fingerprint and consider the output of our classifier correct if it matches any one of them. This is in line with the common practice used by other researchers in comparing classification results on the NIST-4 database.

Classification accuracy does not always increase with K in the K-nearest neighbor classifier; there exists an optimal value of K for finite-training-sample classification problems. Following the method in [24], K = 10 nearest neighbors are used in our experiments. The classification results of our proposed approach are shown in Table 3. The diagonal entries in Table 3 show the number of test patterns from each class that are correctly classified.
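The classification stage can be sketched with a plain NumPy K-nearest-neighbor classifier; the Euclidean metric and majority vote are our assumptions, as the paper does not state them:

```python
import numpy as np

def knn_predict(train_X, train_y, x, K=10):
    """Majority vote among the K training samples closest to x."""
    d = np.linalg.norm(train_X - x, axis=1)        # Euclidean distances
    nearest = np.argsort(d)[:K]                    # indices of the K nearest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```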

From Table 3, we can conclude that the proposed algorithm achieves an accuracy of 94.6 percent for the five-class classification task. For the four-class classification task (where classes A and T were collapsed into one class), an accuracy of 96.8 percent is achieved.

*Experiment*. To evaluate the performance of the proposed algorithm, we compared it with wavelet-based, GLCM-based, and Curvelet-based approaches. For the wavelet-based approach, we use the wavelet transform to decompose the gray images into wavelet coefficients at five scales using the wavelet bases Symmlet 4, 5, 6, 8, and 9 and calculate the averaged norm of the wavelet coefficients at each scale, yielding WT feature vectors of dimension 16. These five bases were chosen because they gave the best results in the work of Tico et al. [10]. Table 4 shows the comparison results for the five-class classification.

From Table 4, we can conclude that our algorithm achieves higher accuracy for class W by reducing the misclassification of W as L or R, higher accuracy for class R by reducing the misclassification of R as A, and higher accuracy for class A by reducing the misclassification of A as T.

The Curvelet-based approach also outperforms the wavelet-based and GLCM-based ones, because CT captures the direction of fingerprint ridges better than WT and GLCM. Furthermore, the proposed algorithm provides much more information on ridge direction by combining the good statistical performance of GLCM with the directional sensitivity of CT.

Most misclassifications of the proposed approach are caused by heavy noise in poor-quality fingerprints, where it is very difficult to extract the Curvelet coefficients correctly.

#### 4. Conclusion

In this paper, we present an efficient fingerprint classification algorithm that uses CT and GLCM to model the feature set of a fingerprint. This paper makes two main contributions. Firstly, we construct a Curvelet filter that smooths the discontinuities of ridges and removes the noise in the original image, so that the ridge direction is well preserved. Secondly, combining the effectiveness of CT and GLCM, we construct a 57-dimensional feature vector as classifier input that compactly represents the curve singularities and the statistics of a fingerprint image. We have tested our algorithm on the NIST-4 database, and very good performance has been achieved (94.6 percent for the five-class classification problem and 96.8 percent for the four-class classification problem with 1.8 percent rejection). This good performance can be ascribed to the high information content of the Curvelet features and to the combination of GLCM and CT.

Our system takes about 1.47 seconds on an AMD E-350 PC to classify one fingerprint, which needs improvement. Since the image decomposition (filtering) steps account for 82 percent of the total compute time, special-purpose hardware for the Curvelet transform could significantly decrease the overall classification time.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors are grateful to the anonymous reviewers for their constructive comments. This work was supported by the National Natural Science Foundation of China (nos. 61203302 and 51107088) and the Tianjin Research Program of Application Foundation and Advanced Technology (14JCYBJC18900).