Journal of Electrical and Computer Engineering

Special Issue: Advanced Information Technology Convergence

Research Article | Open Access

Volume 2016 | Article ID 7965936

Bang Chao Liu, Shan Juan Xie, Dong Sun Park, "Finger Vein Recognition Using Optimal Partitioning Uniform Rotation Invariant LBP Descriptor", Journal of Electrical and Computer Engineering, vol. 2016, Article ID 7965936, 10 pages, 2016.

Finger Vein Recognition Using Optimal Partitioning Uniform Rotation Invariant LBP Descriptor

Academic Editor: Anthony T. S. Ho
Received: 04 Dec 2015
Accepted: 14 Mar 2016
Published: 06 Apr 2016


As a promising biometric technique, finger vein identification has been studied widely and many related methods have been proposed. However, it is hard to extract a satisfactory finger vein pattern because of varying vein thickness, uneven illumination, low-contrast regions, and noise. Moreover, most feature extraction algorithms rely on high-quality finger vein databases and take a long time to process large-dimensional feature vectors. In this paper, we propose two block selection methods, one based on an estimate of the amount of information in each block and the other on the contribution of each block location as measured by its individual recognition rate, to reduce feature extraction time and matching time. The specific approach is to find local finger vein areas with low quality and noise, which are useless for feature description. Local binary pattern (LBP) descriptors are used to extract the finger vein pattern features. Two finger vein databases are used to test the performance of our algorithms. Experimental results show that the proposed block selection algorithms can reduce the feature vector dimensionality to a large extent.

1. Introduction

Biometric systems are automated methods of verifying or recognizing the identity of a living person on the basis of physiological characteristics, like a fingerprint or iris pattern, or aspects of behavior, like handwriting or finger vein patterns [1]. Biometrics have many advantages, such as being hard to lose, difficult to forge, and convenient to use, and they have attracted attention all over the world, with applications in identity authentication, entry-exit management, security monitoring, electronic commerce, and so forth.

Finger vein recognition is a relatively new biometric identification method: hemoglobin in the blood absorbs infrared light, so the vein pattern forms a shadow in the finger vein image [2, 3]. The advantages of a finger vein recognition system (FVRS) are that it uses internal physiological characteristics which are difficult to forge; uniqueness; noncontact or weak-contact acquisition; no interference from the finger surface or surrounding environment; and a small imaging device [4]. Its weaknesses are uncertain permanence (the effect of changes with age still needs to be verified), the need for special collection equipment (the design of a finger vein imaging device is relatively complex), and currently higher production costs.

The original image contains not only vein patterns but also messy shading and noise. Such interference is produced by the different thicknesses of bones and muscles, and light scattering and finger movement can also blur the image. Therefore, enhancing the original image and weakening the noise are essential. Figure 1 shows some low-quality finger vein samples, including an image blurred by slight movement (a); uneven illumination causing a highlighted area (b); part of the region missing (c, d); and finger rotation (e, f).

In order to solve these problems, recent studies have provided some effective solutions for low-quality finger vein images, improving the system recognition rate through the establishment of high-quality finger vein databases [4]. Lu et al. proposed a finger vein ROI localization method that is highly effective and robust against low quality; accurate finger region segmentation and correctly calculated orientations produce higher accuracy in localizing ROIs [5]. The Sobel operator was used for detecting the edge of a finger [6]. Paper [7] utilized the Hough transform to extract the binary edge image. In terms of texture features, Lu et al. proposed an extended local line binary pattern (LLBP) method, named polydirectional local line binary pattern (PLLBP), which can extract line patterns in any orientation so as to capture the most discriminative line patterns [8]. Other state-of-the-art results include SIFT features [9], maximum curvature [10], multifeature fusion technology [9], segmentation based on local entropy thresholding [11], support vector regression (SVR) [12], convolutional neural networks [13], and line tracking [14].

In this paper, starting from qualified recognition rates of 99.87% and 99.31% using improved LBP descriptors, we propose two different block selection methods, one based on an estimate of the amount of information in each block and the other on the contribution of each block location as measured by the recognition rate of each block position, to represent the discriminative ability of each block, and we use the more powerful local image regions as a new feature vector for identification. This paper makes four contributions.
(1) Databases. We utilize two high-quality finger vein databases established in our lab, named MMCBNU_6000 [15] and MMCBNU_2C [1]. The first contains 6000 finger vein images captured from 100 volunteers. The second has 6976 samples collected from 109 volunteers.
(2) Individual Block Feature Analysis. This paper is the first to pay attention to the discriminative ability of each individual block. The discrimination ability of some small blocks exceeded our expectations: a single local block can give a recognition rate of 80% using the uniform LBP descriptor with a 59-dimensional feature vector.
(3) Block Selection Mode. We propose two block selection methods, based on the estimate of the amount of information in each block and on the contribution of block location as measured by the recognition rate of each block position, to reduce feature extraction time and matching time.
(4) Feature Extraction. Uniform rotation invariant LBP (LBPriu2) uses fewer dimensions to represent more finger vein image features and, combined with the block selection modes, reduces the feature vector dimensionality to a large extent.

The rest of this paper is structured as follows. Section 2 briefly introduces a finger vein identification system in which our motivation and system architecture are included. The proposed block selection methods are described in Section 3. Then, Section 4 presents the experimental results. Finally, conclusion and future work are given in Section 5.

2. Finger Vein Identification System

2.1. Motivation

LBP is a local feature descriptor which is powerful for representing local finger vein feature information but is not suitable for describing global features. In general, and according to our experimental results, a more complex blocking structure, in other words more blocks, yields a more discriminative feature vector and a better recognition result. But too many blocks produce two serious problems: longer feature extraction time and a larger feature dimension (with the standard 256-bin LBP descriptor, 12 blocks give 12 × 256 = 3072 dimensions and 18 blocks give 18 × 256 = 4608 dimensions). In addition, high-dimensional feature vectors need more storage space in the computer. Finally, there are inevitably some noisy and low-quality parts in a finger vein image, which degrade the system recognition performance.

According to the above discussion, we consider that some of the blocks are unnecessary and useless for a finger vein recognition system. So, we try to find a method which is able to delete some blocks, while there is no effect on the recognition rate. As a result, under the premise of maintaining the recognition rate, the feature extraction time and the matching time will be greatly reduced, thereby improving the finger vein recognition system performance. Figure 2 shows some small blocks with different size structures in finger vein images.

2.2. System Composition

A typical finger vein identification system involves four main modules: data acquisition, image preprocessing, feature extraction, and matching. Figure 3 shows a typical biometric system structure diagram.
(1) Image Acquisition. The first step is to build the experimental image database, whose quality directly affects the final identification performance. Preprocessing and postprocessing can solve some low-quality problems [16], but if the image quality is too low, simple postprocessing can hardly solve them.
(2) Preprocessing. This is a crucial process in the whole identification system. The main steps are ROI localization, denoising, alignment, and enhancement.
(3) Feature Extraction. This is the core module of a finger vein recognition system (FVRS) and determines its performance to a large extent, in both recognition accuracy and processing time.
(4) Matching. Different types of matching distance are suitable for different feature extraction methods, and a suitable feature matching distance greatly improves the system recognition rate.

2.3. ROI Localization and Image Enhancement

In this paper, ROI localization is based on [5]; all the images are normalized, aligned, and calibrated to the same resolution. The ROI localization method is highly effective and robust against image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture; the main steps include finger segmentation, orientation correction, and ROI detection. Slight movement of the finger, different finger thicknesses, and uneven illumination all lead to finger vein images whose contrast is low and not distinct enough for identification. In this ROI localization algorithm, contrast-limited adaptive histogram equalization (CLAHE) is utilized to enhance the image quality. Figure 4 gives a sample of ROI localization.

3. Feature Extraction

This section focuses on the proposed feature extraction method based on block selection modes. First of all, a fundamental explanation of the LBP descriptor [17] is given, as shown in Figure 5.

The binary code is the corresponding feature value, and multiscale texture analysis is obtained by changing the values of P and R. The formula is as follows:

LBP_{P,R} = sum_{p=0}^{P-1} s(g_p - g_c) * 2^p,  with s(x) = 1 if x >= 0 and s(x) = 0 otherwise,

where g_c is the gray value of the centre pixel, g_p (p = 0, ..., P - 1) represents the P neighbors, and R is the sampling radius.
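As a concrete illustration, the basic LBP operator for P = 8 neighbours at radius R = 1 can be sketched in a few lines; this is only an illustrative sketch (function and variable names are our own, not from the paper):

```python
def lbp_8_1(img, x, y):
    """Compute the LBP_{8,1} code at pixel (x, y) of a 2D gray-level image.

    Neighbours are taken clockwise from the top-left; a neighbour
    contributes a 1-bit when it is >= the centre pixel (s(g_p - g_c) = 1).
    """
    gc = img[y][x]
    # (dy, dx) offsets of the 8 neighbours at radius 1, clockwise
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for p, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= gc:
            code |= 1 << p
    return code

# Toy 3x3 patch: neighbours brighter than the centre (50) set 1-bits.
patch = [[90, 30, 60],
         [70, 50, 40],
         [20, 80, 10]]
print(lbp_8_1(patch, 1, 1))   # bits 0, 2, 5, 7 -> 1 + 4 + 32 + 128 = 165
```

The neighbour ordering (and hence the decimal code) is a convention; any fixed ordering gives an equivalent descriptor.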

3.1. The Traditional Feature Extraction Method

Based on the ROI databases [1, 15] and the traditional texture feature extraction method, now using uniform LBP descriptors [18], each ROI image is divided into N blocks, and a histogram feature vector is extracted for each block. The formula is as follows:

F = [H_1, H_2, ..., H_N],

where F is the final extracted feature vector, H_i represents the histogram of the i-th block, and N is the total number of blocks. For example, if we use the uniform LBP descriptor to extract the feature with N = 18 blocks, each block corresponds to a histogram with 59 bins (a 59-dimensional feature vector), so the final uniform LBP feature vector is a 1062-dimensional vector (18 × 59 = 1062).
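The 59-bin count follows directly from the definition of "uniform" patterns (at most two 0/1 transitions in the circular 8-bit code: 58 uniform patterns plus one shared bin for all non-uniform codes). A short illustrative check, with hypothetical names:

```python
def transitions(code):
    """Number of 0/1 transitions in the circular 8-bit pattern."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

# A pattern is 'uniform' if it has at most two circular transitions.
uniform = [c for c in range(256) if transitions(c) <= 2]
print(len(uniform))                 # 58 uniform patterns

# One bin per uniform pattern plus one bin for everything else.
n_bins = len(uniform) + 1           # 59 bins per block
n_blocks = 18
print(n_bins, n_blocks * n_bins)    # 59, 1062-dimensional vector
```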

3.2. Estimate of Amount of Information in Each Block

The idea of the first block selection method is to estimate the amount of information in each block. A block which contains more finger vein pattern has more discriminative ability. In contrast, if a block contains only background or noise, it is useless for a finger vein recognition system. Following this idea, we calculate the amount of finger vein pattern in each block from the finger vein binary image. The flowchart of the first block selection principle is shown in Figure 6.

In order to estimate the amount of information in each block, we first need to segment the finger vein image into a binary image. However, it is hard to segment a finger vein pattern because of varying vein thickness, uneven illumination, low-contrast regions, and noise [19]. At the least, a preset fixed global threshold is not appropriate for segmenting the finger vein pattern. Furthermore, there are always some pseudovein sections and noise in the segmented image, such as small pseudoveins along the edge and small annular regions, which disturb the image thinning step. The Niblack algorithm is a local threshold method based on the calculation of the local mean and local standard deviation [20]. The threshold is decided by the following formula:

T(x, y) = m(x, y) + k * s(x, y),

where m(x, y) and s(x, y) are the mean and standard deviation of a local area, respectively. The size of the neighborhood should be small enough to preserve local details but at the same time large enough to suppress noise. The value of k is used to adjust how much of the total print object boundary is taken as part of the given object. Segmented images using the Niblack threshold are shown in Figure 7.
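A minimal, unoptimised sketch of Niblack thresholding under the formula above; the window size and k = -0.2 are illustrative assumptions, not values reported by the paper (real implementations compute m and s with integral images instead of an explicit loop):

```python
import numpy as np

def niblack_threshold(img, window=3, k=-0.2):
    """Per-pixel Niblack threshold T = m + k * s over a local window.

    m and s are the local mean and standard deviation. Marks a pixel as
    foreground when it exceeds its local threshold; for dark vein
    patterns the comparison would simply be inverted.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    r = window // 2
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            t = win.mean() + k * win.std()
            out[y, x] = img[y, x] > t
    return out

# A bright spot on a flat background is picked up; flat regions are not.
img = np.zeros((5, 5))
img[2, 2] = 255.0
mask = niblack_threshold(img, window=3, k=-0.2)
print(mask[2, 2], mask[0, 0])   # True False
```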

Next, based on the segmented images, we create the finger vein skeleton images using a morphology thinning algorithm [21], which transforms a digital image into a simplified but topologically equivalent image. There are some noise and line fuzz in the binary images, which result in incorrect skeleton structures in Figure 8(b). Finally, we propose to use the morphology opening operator for noise reduction [22] and line fuzz removal, as shown in Figure 8(c).
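The opening operator (erosion followed by dilation) removes specks smaller than the structuring element while preserving larger regions. A self-contained sketch; the 3 × 3 structuring element is an assumption for illustration:

```python
import numpy as np

def erode3(b):
    """3x3 binary erosion (min filter) with zero padding."""
    p = np.pad(b, 1)
    out = np.ones_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def dilate3(b):
    """3x3 binary dilation (max filter) with zero padding."""
    p = np.pad(b, 1)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def opening3(b):
    """Opening = erosion then dilation: removes specks smaller than 3x3."""
    return dilate3(erode3(b))

# An isolated 1-pixel speck disappears; a solid 3x3 region survives.
img = np.zeros((7, 7), dtype=int)
img[1, 1] = 1            # noise speck
img[3:6, 3:6] = 1        # solid block
opened = opening3(img)
print(opened[1, 1], opened[3:6, 3:6].sum())   # 0 9
```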

The skeleton image is then divided into several small blocks; in Figure 8(d), there are 18 blocks. The number of foreground pixels in each block is counted, and the block indices are saved in descending order of this count into a new array A. The reduced feature vector is

F' = [H_{A(1)}, H_{A(2)}, ..., H_{A(M)}],

where F' is the new final extracted feature vector, H_{A(i)} represents the histogram of the i-th selected block, and M is the total number of blocks that are used. If we do not use all the blocks, the dimensionality of F' will be smaller than the dimensionality of F (formula (2)). We require that the recognition rates obtained with the two feature vectors (F and F') are equal.
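The counting-and-ordering step can be sketched as follows; the grid size and all names are hypothetical, and the returned index order plays the role of the array A above:

```python
import numpy as np

def rank_blocks(skeleton, n_rows, n_cols):
    """Split a binary skeleton image into an n_rows x n_cols grid and
    return block indices sorted by foreground-pixel count, descending.

    The returned order decides which block histograms enter the
    reduced feature vector.
    """
    skeleton = np.asarray(skeleton)
    h, w = skeleton.shape
    bh, bw = h // n_rows, w // n_cols
    counts = []
    for i in range(n_rows):
        for j in range(n_cols):
            blk = skeleton[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            counts.append((int(blk.sum()), i * n_cols + j))
    counts.sort(key=lambda t: -t[0])   # most vein pixels first
    return [idx for _, idx in counts]

# Toy 4x4 image split into a 2x2 grid; the top-right block is densest.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [1, 0, 0, 0],
                [0, 0, 0, 0]])
print(rank_blocks(img, 2, 2))   # [1, 2, 0, 3]
```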

3.3. Contribution of Block Location by Looking at Recognition Rate of Each Block Position

In a biometric system, we are faced with a closed database, which means that all images are labeled, so we know exactly which finger vein image was captured from which individual finger. In the study of database quality assessment, calculating the recognition rate is a common way to evaluate a biometric database [23], especially in fingerprint database quality evaluation [24]. In the block selection mode based on the binary image, we can find that some thinned blocks contain no finger vein pattern. Intuitively, such a block seems useless because there is nothing in it, but these regions can still provide some discriminative ability for recognition to a certain degree.

The blocks with low recognition rates are abandoned, and the blocks with the maximum recognition rates are used as a new feature vector, as shown in Figure 9.

As in the first block selection method, we use the recognition rate of each block to build the ordering array that replaces the one in formula (4). The other processing details are the same as in the first block selection method.

3.4. Matching

Many kinds of distance can evaluate the similarity between two histograms, for example, correlation distance, the chi-square coefficient, the intersection coefficient, and the Bhattacharyya distance [25]. Generally, if the system is required to achieve high speed rather than a very accurate recognition rate, the intersection method shows good results; otherwise, the chi-square distance is a better choice. In this paper, we utilize the histogram intersection method to measure the similarity between two histograms. The formula is defined as follows:

d(H_1, H_2) = sum_{i=1}^{B} min(H_1(i), H_2(i)),

where B is the total number of bins in the histogram.
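Histogram intersection is a direct sum of bin-wise minima; a toy example with integer bin counts (names hypothetical):

```python
def histogram_intersection(h1, h2):
    """Similarity between two histograms: sum of bin-wise minima.

    For L1-normalised histograms the value lies in [0, 1];
    larger means more similar.
    """
    assert len(h1) == len(h2)
    return sum(min(a, b) for a, b in zip(h1, h2))

# min(2,4) + min(5,4) + min(3,2) = 2 + 4 + 2
print(histogram_intersection([2, 5, 3], [4, 4, 2]))   # 8
```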

4. Experimental Results

In this section, we apply the proposed block selection methods to extract features from two finger vein databases, MMCBNU_6000 and MMCBNU_2C. Firstly, the two databases are described in detail; then the recognition performance of uniform rotation invariant LBP is compared with other state-of-the-art methods. Last, an analysis of the experimental results and future work are given.

4.1. Databases

MMCBNU_6000. There are 100 volunteers, using each subject's index, middle, and ring fingers of both hands. Each finger is captured 10 times, so there are 60 finger vein images per volunteer. In total, there are 100 × 60 = 6000 images, all at the same resolution.

MMCBNU_2C. There are 109 volunteers; each subject was asked to provide 8 images of both the index finger and the middle finger. Because two cameras are installed on the device, two subdatabases can be established; the left and right databases each contain half of the 6976 samples. From the database, we can see that two images captured from one finger at different angles differ greatly. This means we can combine the left and right databases into a more discriminative finger vein database based on image fusion technology [26]. Alternatively, each of them can be used on its own for identification or verification.

All images are normalized, aligned, and calibrated to the same resolution. The feature extraction algorithms compared are as follows:
(1) LBPu2: uniform LBP;
(2) LBPri: rotation invariant LBP;
(3) LBPriu2: uniform rotation invariant LBP;
(4) GLCM: gray-level cooccurrence matrix [27];
(5) HOG: histogram of gradients [28];
(6) LDC: local direction code [29];
(7) LLBP: local line binary pattern [8];
(8) Curvature: mean curvature [30].

4.2. Experimental Results on Database MMCBNU_6000

Each finger corresponds to ten finger vein images; the first half of the ten images is taken as the training set and the remaining half as the test set, so each of the two sets has 3000 images. Every image in the test set is compared with every image in the training set, so the total number of matchings is 3000 × 3000 = 9,000,000, divided into genuine (same-finger) and imposter pairs. Recognition rate, FRR, FAR, and EER are used as the evaluation criteria.
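The matching counts implied by this protocol follow from simple arithmetic; the genuine/imposter split below assumes each test image is matched against the 5 training images of its own finger:

```python
# Matching protocol sketch for MMCBNU_6000:
# 600 fingers (100 volunteers x 6 fingers) x 10 images each,
# 5 images per finger for training and 5 for testing.
n_fingers = 600
train_per_finger = 5
test_per_finger = 5

n_train = n_fingers * train_per_finger    # 3000 training images
n_test = n_fingers * test_per_finger      # 3000 test images
total = n_test * n_train                  # every test vs every train
genuine = n_test * train_per_finger       # same-finger comparisons
imposter = total - genuine
print(total, genuine, imposter)           # 9000000 15000 8985000
```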

The experimental results are heavily affected by the partitioning structure. As Table 1 shows, the best matching performance, 99.87%, is obtained with the minimum dimension, corresponding to the uniform rotation invariant LBP.

Table 1: Recognition rate (%) for different block structures.
Block structure | LBPu2 (59) | LBPri (36) | LBPriu (10) | LBPVu2 (59) | LBPVri (36) | GLCM (6)
(table body not preserved in this copy)


It should be noted that increasing the complexity of the block structure sometimes brings not a higher but a lower recognition rate. The reason is that low-quality blocks interfere with finger vein pattern matching. Therefore, we must find the useless blocks and remove them. The comparison of recognition results and EER results using state-of-the-art methods is shown in Table 2.

Table 2: Recognition rate (%) and EER (%) for each algorithm.
Algorithm | Recognition rate (%) | EER (%)
(table body not preserved in this copy)
Figure 10 shows the ROC curves of the different feature extraction algorithms. LBPu2 gives the best performance; LBPri, LBPriu2, and HOG remain at the same recognition level; next come Curvature, GLCM, and LLBP.

Uniform LBP outperforms the other algorithms with the smallest EER, but it also requires longer processing time and a larger feature dimension than LBPri and LBPriu2.

4.3. Estimate of the Amount of Information in Each Block

According to the contents of each finger vein pattern block, we choose the blocks with the most contents to generate a new feature vector; the results are shown in Table 3. In conclusion, we do not need to use all the feature information for identification, because much of it is useless and wastes feature extraction and matching time. With uniform LBP, using just 8 blocks reaches the best recognition result of 99.87%. Considering the number of blocks and the feature dimension together, uniform rotation invariant LBP gives the best performance with 130 dimensions.

Table 3: Recognition rate and feature dimension versus number of selected blocks.

(a) Uniform LBP
Block number | Recognition rate | Feature dimension
(table body not preserved in this copy)

(b) Rotation invariant LBP
Block number | Recognition rate | Feature dimension
(table body not preserved in this copy)

(c) Uniform rotation invariant LBP
Block number | Recognition rate | Feature dimension
(table body not preserved in this copy)

(d) GLCM
Block number | Recognition rate | Feature dimension
(table body not preserved in this copy)
4.4. Contribution of Block Location by Looking at Recognition Rate of Each Block Position

Firstly, we test the independent recognition rate of each block and then combine the blocks with higher discriminative ability into a new feature vector to reduce the dimensionality further. Table 4 shows the result.

Table 4: Individual recognition rate of each block position.
Block location number | LBPu2 (59) | LBPri (36) | LBPriu2 (10) | GLCM (6)
(table body not preserved in this copy)

It is clear that the discriminative abilities of the blocks differ greatly, by over 10%. The new feature vector is generated according to Table 4.

Block selection based on recognition rate shows better performance than the information-based method. In particular, uniform LBP uses only 5 blocks to get the best matching result with 295 dimensions, and uniform rotation invariant LBP reduces the feature to 100 dimensions, as shown in Table 5.

Table 5: Best results after block selection based on block recognition rate.
Features | Block number | Highest recognition rate (%) | Feature dimensionality
(table body not preserved in this copy)

4.5. Experimental Results on Database MMCBNU_2C

The experimental processing is the same as in the previous steps. Firstly, we utilize all 18 feature blocks to test the recognition performance, as shown in Table 6; uniform LBP again gives the best recognition result.

Table 6: Recognition rate (%) on the left and right databases using all 18 blocks.
Algorithms | Left recognition rate (%) | Right recognition rate (%)
(table body not preserved in this copy)

Since this database is based on two cameras and fingers rotate more during image collection, the quality of MMCBNU_2C is lower than that of the first database. The ROC curves are drawn in Figure 11.

The left and right databases give very similar results. The uniform LBP feature outperforms the other algorithms, with a recognition rate of 99.43% and an EER of 1.34% (Table 7).

Table 7: EER (%) on the left and right databases.
Algorithm | EER (%)
LBPu2 (left) | 1.34
LBPu2 (right) | 1.73
LBPri (left) | 5.34
LBPri (right) | 5.06
LBPriu2 (left) | 5.44
LBPriu2 (right) | 5.02

4.6. Block Selection with MMCBNU_2C

The independent block recognition rates are used directly to select the proposed blocks. The results are shown in Table 8.

Table 8: Best results after block selection on MMCBNU_2C (dimensionality = blocks × bins per block).
Features | Block number | Best recognition rate (%) | Feature dimensionality
LBPu2 (left) | 7 | 99.43 | 7 × 59 = 413
LBPri (left) | 12 | 95.07 | 12 × 36 = 432
LBPriu2 (left) | 12 | 95.18 | 12 × 10 = 120
LBPu2 (right) | 8 | 99.31 | 8 × 59 = 472
LBPri (right) | 12 | 95.64 | 12 × 36 = 432
LBPriu2 (right) | 13 | 95.53 | 13 × 10 = 130

The best matching rate comes from uniform LBP; on both the left and right databases it is over 99.30%, with 413 and 472 dimensions, respectively. From another viewpoint, uniform rotation invariant LBP represents more feature information using fewer dimensions than the other features.

5. Conclusion

In this paper, we proposed two block selection methods, based on the estimate of the amount of information in each block and on the contribution of block location as measured by the recognition rate of each block position, to reduce the feature dimensionality while keeping the best recognition rate on two finger vein image databases. The experimental results show that the feature vector dimensionality is greatly reduced compared with the original feature vector. However, many problems remain to be solved: for example, when we have to deal with an open finger vein database and cannot use block recognition rates to select the proposed areas, how can we reduce the feature dimensionality while achieving a similar or better recognition result? On the other hand, large finger vein image rotation is still very difficult to handle, according to the recognition results on the second database. Next, we will use fusion technology [26] to improve the system performance.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.


Acknowledgments

This work is supported by the Basic Science Research Program through the Brain Korea 21 PLUS Project and the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2013R1A1A2013778).


References

1. Y. Lu, S. Yoon, and D. S. Park, “Finger vein identification system using two cameras,” Electronics Letters, vol. 50, no. 22, pp. 1591–1593, 2014.
2. J. Hashimoto, Finger Vein Authentication Technology and Its Future, IEEE, Honolulu, Hawaii, USA, 2006.
3. Hitachi, Finger Vein Authentication: White Paper, Hitachi, 2006.
4. R. Raghavendra, J. Surbiryala, K. B. Raja, and C. Busch, “Novel finger vascular pattern imaging device for robust biometric verification,” in Proceedings of the IEEE International Conference on Imaging Systems and Techniques (IST '14), pp. 148–152, Santorini, Greece, October 2014.
5. Y. Lu, S. J. Xie, S. Yoon, J. Yang, and D. S. Park, “Robust finger vein ROI localization based on flexible segmentation,” Sensors, vol. 13, no. 11, pp. 14339–14366, 2013.
6. F. Zhong and J. Zhang, “Face recognition with enhanced local directional patterns,” Neurocomputing, vol. 119, pp. 375–384, 2013.
7. Z. Zhongbo, M. Siliang, and H. Xiao, “Multiscale feature extraction of finger-vein patterns based on curvelets and local interconnection structure neural network,” in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), pp. 145–148, IEEE, Hong Kong, August 2006.
8. Y. Lu, S. Yoon, S. Xie, and D. S. Park, “Finger vein identification using polydirectional local line binary pattern,” in Proceedings of the 4th International Conference on ICT Convergence (ICTC '13), pp. 61–65, IEEE, October 2013.
9. H. Qin, L. Qin, L. Xue, X. He, C. Yu, and X. Liang, “Finger-vein verification based on multi-features fusion,” Sensors, vol. 13, no. 11, pp. 15048–15067, 2013.
10. N. Miura and A. Nagasaka, “Extraction of finger-vein patterns using maximum curvature points in image profiles,” in Proceedings of the IAPR Conference on Machine Vision Applications, pp. 347–350, Tsukuba Science City, Japan, May 2005.
11. S. Damavandinejadmonfared, “Finger vein recognition using linear kernel entropy component analysis,” in Proceedings of the IEEE 8th International Conference on Intelligent Computer Communication and Processing (ICCP '12), pp. 249–252, IEEE, Cluj-Napoca, Romania, September 2012.
12. L. Zhou, G. Yang, L. Yang, Y. Yin, and Y. Li, “Finger vein image quality evaluation based on support vector regression,” International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 8, no. 8, pp. 211–222, 2015.
13. S. Ahmad Radzi, M. Khalil-Hani, and R. Bakhteri, “Finger-vein biometric identification using convolutional neural network,” pp. 1–37, 2014.
14. N. Miura, A. Nagasaka, and T. Miyatake, “Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification,” Machine Vision and Applications, vol. 15, no. 4, pp. 194–203, 2004.
15. Y. Lu, S. J. Xie, S. Yoon, Z. H. Wang, and D. S. Park, “An available database for the research of finger vein recognition,” in Proceedings of the 6th International Congress on Image and Signal Processing (CISP '13), pp. 410–415, Hangzhou, China, December 2013.
16. C. Kauba, J. Reissig, and A. Uhl, “Pre-processing cascades and fusion in finger vein recognition,” in Proceedings of the International Conference of the Biometrics Special Interest Group (BIOSIG '14), pp. 1–6, Darmstadt, Germany, September 2014.
17. Z. Guo, L. Zhang, and D. Zhang, “Rotation invariant texture classification using LBP variance (LBPV) with global matching,” Pattern Recognition, vol. 43, no. 3, pp. 706–719, 2010.
18. T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
19. M. Vlachos and E. Dermatas, “Finger vein segmentation from infrared images based on a modified separable Mumford-Shah model and local entropy thresholding,” Computational and Mathematical Methods in Medicine, vol. 2015, Article ID 868493, 20 pages, 2015.
20. K. Khurshid, I. Siddiqi, C. Faure, and N. Vincent, “Comparison of Niblack inspired binarization methods for ancient documents,” in Document Recognition and Retrieval XVI, vol. 7247 of Proceedings of SPIE, San Jose, Calif, USA, January 2009.
21. R. P. Prakash, K. S. Prakash, and V. P. Binu, “Thinning algorithm using hypergraph based morphological operators,” in Proceedings of the 5th IEEE International Advance Computing Conference (IACC '15), pp. 1026–1029, IEEE, Bangalore, India, June 2015.
22. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2013.
23. E. Tabassi, C. L. Wilson, and C. I. Watson, Fingerprint Image Quality, NIST, 2004.
24. D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, Springer, Berlin, Germany, 2009.
25. O. Miksik and K. Mikolajczyk, “Evaluation of local detectors and descriptors for fast feature matching,” in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 2681–2684, November 2012.
26. G. Goswami, P. Mittal, A. Majumdar, M. Vatsa, and R. Singh, “Group sparse representation based classification for multi-feature multimodal biometrics,” Information Fusion, 2015.
27. Y. Chen and F. Yang, “Analysis of image texture features based on gray level co-occurrence matrix,” Applied Mechanics and Materials, vol. 204–208, pp. 4746–4750, 2012.
28. Y. Lu, S. Yoon, S. J. Xie, J. Yang, Z. H. Wang, and D. S. Park, “Finger vein recognition using histogram of competitive Gabor responses,” in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR '14), pp. 1758–1763, IEEE, Stockholm, Sweden, August 2014.
29. X. Meng, G. Yang, Y. Yin, and R. Xiao, “Finger vein recognition based on local directional code,” Sensors, vol. 12, no. 11, pp. 14937–14952, 2012.
30. W. Song, T. Kim, H. C. Kim, J. H. Choi, H.-J. Kong, and S.-R. Lee, “A finger-vein verification system using mean curvature,” Pattern Recognition Letters, vol. 32, no. 11, pp. 1541–1547, 2011.

Copyright © 2016 Bang Chao Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
