Mathematical Problems in Engineering


Research Article | Open Access


Zhiyong Tao, Xinru Zhou, Zhixue Xu, Sen Lin, Yalei Hu, Tong Wei, "Finger-Vein Recognition Using Bidirectional Feature Extraction and Transfer Learning", Mathematical Problems in Engineering, vol. 2021, Article ID 6664809, 11 pages, 2021.

Finger-Vein Recognition Using Bidirectional Feature Extraction and Transfer Learning

Academic Editor: Adrian Neagu
Received: 04 Nov 2020
Accepted: 22 Apr 2021
Published: 03 May 2021


Accuracy and efficiency are essential topics in current biometric recognition and security research. This paper proposes a deep neural network that uses bidirectional feature extraction and transfer learning to improve finger-vein recognition performance. First, we build a new finger-vein database containing the opposite position information of the original one and adopt transfer learning to make the network suitable for our overall recognition framework. Next, the feature extractor is constructed by tuning the parameters on each unidirectional database, capturing vein features from top to bottom and vice versa. We then concatenate these two features to form the bidirectional finger-vein feature, which is trained and classified by a Support Vector Machine (SVM) to realize recognition. Experiments are conducted on the published database of Universiti Sains Malaysia (FV-USM) and on the finger-vein database of our Signal and Information Processing Laboratory (FV-SIPL). The accuracy of the proposed algorithm reaches 99.67% and 99.31%, respectively, significantly higher than unidirectional recognition on each database. Compared with the algorithms cited in this paper, the proposed bidirectional-feature model achieves higher accuracy and faster recognition than state-of-the-art frameworks and offers clear practical value.

1. Introduction

The fast development of biometric identification technology has made machine vision applications more extensive and in-depth. Meanwhile, with the continual progress of modern science and technology, identity authentication faces ever higher security requirements. Among biometric modalities, finger-vein recognition has been widely applied in information security, network payment, and other fields because it identifies living tissue and is highly resistant to counterfeiting [1, 2]. These advantages have attracted increasing research attention. Finger-vein identification systems usually consist of two processes, namely, feature extraction and matching. Finger veins contain much irregular texture information, shaded regions, and noise. Images of the same finger carry similar information, whereas images of different fingers vary significantly. Therefore, recognition typically relies on selecting functional patterns from finger veins and on suitable matching strategies.

The existing finger-vein models can be roughly separated into two categories: nonlearning and learning models. In nonlearning models, Gabor filters [3] are most often applied for finger-vein feature extraction. Binary vein texture is typically extracted with Local Binary Patterns (LBP) [4] or its improved variant, Line Local Binary Patterns (LLBP) [5]. Feature points are generally extracted with the Scale-Invariant Feature Transform (SIFT) [6]. Researchers often employ these methods as feature extractors and Euclidean distance [7] as the final matching strategy. Since each SIFT keypoint corresponds independently to another, SIFT is usually robust to finger-vein deformation.

Nonlearning methods are easily affected by the quality of finger-vein images, missing texture, and other problems, causing low robustness. To address this, researchers introduced algorithms based on learning models [8], which are less sensitive to image quality and location information. He et al. [9] used Principal Component Analysis (PCA) to obtain the principal components of finger veins and applied neural networks for classification and identification. Khellat-Kihel et al. [10] used a Gabor filter for feature acquisition and an SVM [11] for classification and matching. Wu et al. [12] designed a finger-vein recognition network based on SVM: after extracting and resizing the region of interest (RoI), they used PCA [13] and Linear Discriminant Analysis (LDA) [14] to reduce the feature dimension and an SVM to classify images.

Although these learning methods perform well, they still depend on subjectively designed feature extraction and complex block-wise processing. At present, the Convolutional Neural Network (CNN) is widely used in biometric recognition; its deep feature extraction and strong robustness have advanced finger-vein identification systems. Ahmad Radzi et al. [15] exploited a multilayer CNN to identify finger veins. Meng et al. [16] also applied a CNN to extract feature information but matched the final output characteristics using Euclidean distance. Huang et al. [17] used VGG16 as the backbone network to learn normalized binary features. Hu et al. [18] proposed the deep-learning-based FV-Net, which obtains the recognition result by matching finger-vein feature subregions extracted by a CNN. Since then, more and more scholars have applied CNNs to extract in-depth finger-vein features and have built models with strong robustness and high classification accuracy [19].

However, the abovementioned finger-vein identification algorithms exploit only a unidirectional database for feature extraction and recognition. The resulting accuracy is suboptimal, which can introduce security risks in subsequent practical applications. Transfer learning [20] mitigates the massive parameter count and slow convergence of CNN training. These studies have laid a theoretical foundation for applying transfer learning to enhance biometric recognition.

Multimodal fusion technology has long been a research hotspot in biometric recognition, as it introduces more meaningful information. Miao [21] realized a capable information fusion algorithm for iris and face at the feature and score levels, respectively. Yang et al. [22] proposed a feature-level cancellable multibiometric system based on fingerprints and finger veins, which provides template protection and revocation. Inspired by these works, biometric features can be combined by score-fusion, pixel-level, or feature-level fusion methods for performance enhancement. This paper proposes a bidirectional feature extraction algorithm via transfer learning and feature concatenation for overall finger-vein recognition enhancement.

In this research, taking the published database of Universiti Sains Malaysia (FV-USM) [23] as an example, we create a new database by rotating the original finger-vein images by 180°, so that it contains the opposite location information. The original FV-USM is named the forward database (A FV-USM), and the rotated database is the reverse database (B FV-USM). Similarly, the original FV-SIPL is called A FV-SIPL, and the rotated database B FV-SIPL. These databases are trained separately in a unidirectional way, including those generated by our group in the Signal and Information Processing Laboratory (FV-SIPL). Then, we adjust the pretrained parameters to detect the in-depth information of the two directions and fuse the results by concatenation to form the bidirectional feature under each respective database. Finally, we complete the experiments with an SVM classifier.
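The reverse (B) database can be generated with a plain 180° rotation; a minimal NumPy sketch (PIL or OpenCV would serve equally well, and make_reverse_image is a hypothetical helper name, not from the paper):

```python
import numpy as np

def make_reverse_image(img):
    """Rotate a finger-vein image by 180 degrees to build the
    reverse (B) database from the forward (A) one."""
    # np.rot90 applied twice equals a 180-degree rotation
    return np.rot90(img, k=2)

# Toy 2x3 "image": the reverse copy carries the opposite position information.
a = np.array([[1, 2, 3],
              [4, 5, 6]])
b = make_reverse_image(a)
# b == [[6, 5, 4], [3, 2, 1]]
```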

The experimental results indicate that bidirectional feature extraction reaches accuracies of 99.67% on FV-USM and 99.31% on FV-SIPL, significantly higher than those of the unidirectional models. Moreover, compared with most existing finger-vein recognition algorithms cited in this paper, our model captures richer detailed information, achieves higher accuracy, and consumes less time.

2. Materials and Methods

2.1. Transfer Learning and Learning Optimization

Deep learning developed from the initial perceptron neural network. With the progress of science and technology, improved hardware now provides the computing power needed to train networks with large numbers of parameters. Large-scale data training enhances a network's capability and reduces reliance on prior knowledge while helping to avoid overfitting. Deep learning has become a research hotspot and a crucial component of artificial intelligence.

Our work utilizes a deep CNN to extract finger-vein features and provide richer input features for subsequent SVM recognition, effectively improving the model's accuracy and generalization. Transfer learning [24] is a crucial branch of machine learning and has made significant progress, especially in image recognition. It applies a network pretrained on one task to another task through parameter adjustment; the key issue is finding the correlation between the new problem and the previous one.

In deep learning, CNN optimization is also a complex but essential process. The following two optimization methods are commonly applied; to improve recognition performance, we adopt the first in this paper.

(1) Data enlargement: the CNN training process usually requires numerous data for fitting. In the experiment, it is necessary to reasonably expand the finger-vein databases to fit the network and promote robustness.

(2) Regularization: the purpose of regularization is to obtain small training errors and robust testing results. CNN models generally need corresponding optimization methods to reduce test-set errors, collectively called regularization operations. Regularization improves the robustness of algorithms and prevents overfitting.
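As an illustration of method (1), one common enlargement operation, random cropping, can be sketched in NumPy; the function name and crop sizes are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img, out_h, out_w):
    """Randomly crop a fixed-size patch from a 2-D grayscale image,
    one of the data-enlargement operations described above."""
    h, w = img.shape
    top = rng.integers(0, h - out_h + 1)   # random vertical offset
    left = rng.integers(0, w - out_w + 1)  # random horizontal offset
    return img[top:top + out_h, left:left + out_w]

img = np.arange(100).reshape(10, 10)  # toy 10x10 "image"
patch = random_crop(img, 8, 8)        # each call yields a different 8x8 view
```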

2.2. Feature-Level Fusion

There are two different types of fusion in our experiments, as shown in Figure 1. Our proposed feature concatenation method operates at the feature level. Feature-level fusion [22] is an intermediate-level fusion method: through specific algorithms, the extracted features are condensed into characteristics with large between-class differences and small within-class differences for subsequent feature matching and classification decisions.

Additionally, score-level fusion [21] is another fusion strategy, operating at the matching level. This method is fast to implement, uses simple fusion rules, and works well in practice. After feature extraction and matching, the resulting matching distances or scores are standardized to a unified calculation criterion; according to specific score-fusion rules, fusion weights then contribute to the final result. This method has apparent advantages in multimodal biometric recognition.
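The score-fusion procedure described above (standardize, then weight) can be sketched as follows. Min-max normalization and equal 5 : 5 weights are one illustrative choice, not necessarily the exact rule used in the experiments:

```python
import numpy as np

def min_max_normalize(scores):
    """Standardize matcher scores to [0, 1] so they share a criterion."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def score_fusion(scores_a, scores_b, w_a=0.5, w_b=0.5):
    """Weighted-sum score fusion; a 5 : 5 split means w_a = w_b = 0.5."""
    return w_a * min_max_normalize(scores_a) + w_b * min_max_normalize(scores_b)

# Two matchers on different scales agree that class 1 is the best match.
fused = score_fusion([0.2, 0.8, 0.5], [10.0, 30.0, 20.0])
best = int(np.argmax(fused))  # -> 1
```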

In a nutshell, our proposed feature concatenation method fuses several different feature sets into more representative feature vectors, while the score-level fusion method offers low implementation difficulty, low complexity, and fast recognition. The experiments are carried out at both the feature level and the score level to verify our proposed network based on the concatenated feature.

3. Proposed Approach

In this paper, our proposed methodology based on bidirectional feature extraction thoroughly considers the rationality and feasibility of the scheme. The standard CNN feature extraction process scans an image from left to right and from top to bottom. Our bidirectional method includes both this extraction process and its reverse. As a result, we acquire more features from the same finger-vein images, facilitating subsequent recognition. Theoretically, the more information is obtained, the better the recognition effect, which is confirmed by our experimental results. In addition, the features extracted from the two images have fixed positional relations and can be preserved by image registration, which makes further practical application feasible. The implementation steps are as follows, and the overall flowchart is shown in Figure 2.

Step 1. Acquire finger-vein images and build a new finger-vein dataset with reversed positions. Before being input to the framework, the images are preprocessed by region-of-interest (RoI) extraction, normalization, and image enhancement, as detailed in Section 4.1.

Step 2. Following the pretrained structure and parameter migration of VGG19 and ResNet50, we construct and save the feature extraction framework. Together with a pooling layer and a 2048-dimensional fully connected layer, these networks serve as the proposed finger-vein feature extractor, as shown in Section 3.1.

Step 3. The A FV-USM, B FV-USM, A FV-SIPL, and B FV-SIPL datasets are input to the model, respectively. For each, the network outputs a 2048-dimensional vector in the feed-forward pass, which represents the unidirectional finger-vein feature, as shown in Section 3.2.

Step 4. We can concatenate the two finger-vein features from the same database and feature extraction method, generating a bidirectional feature. Finally, training and testing processes are completed through the SVM classifier, as shown in Section 3.3.
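The four steps above can be sketched end to end. The random-projection extractors below are hypothetical stand-ins for the tuned VGG19/ResNet50 models, and DIM is reduced from the paper's 2048 to keep the demo small:

```python
import numpy as np

DIM = 32  # the paper uses 2048-dimensional unidirectional features

def dummy_extractor(img, seed):
    """Hypothetical stand-in for a tuned CNN extractor (Steps 2-3):
    projects the image onto DIM fixed random directions."""
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((DIM, img.size))
    return proj @ img.ravel()

def bidirectional_feature(img):
    """Step 4: concatenate the forward (A) and reverse (B) features."""
    forward = dummy_extractor(img, seed=0)               # A-database extractor
    reverse = dummy_extractor(np.rot90(img, 2), seed=1)  # B-database extractor
    return np.concatenate([forward, reverse])

img = np.random.default_rng(42).random((16, 32))  # toy grayscale image
feat = bidirectional_feature(img)                 # 2 * DIM dimensions
```

The concatenated vector (4096-dimensional in the paper) is what the SVM classifier trains on.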

3.1. Pretrained Model Selection

LeNet5 [25] marks the beginning of CNN research; AlexNet [26] and VGGNet [8] are its improvements, both belonging to the nonbranching family. As networks grew deeper, training began to suffer from gradient dispersion, overfitting, and other problems. Researchers therefore no longer limited themselves to deepening networks but also widened them, leading to several excellent architectures such as ResNet [27], DenseNet [28], and ResNeXt [29]. Like [30], these works introduced residual calculation to deepen the network and shortcut connections to aid fitting.

Meanwhile, learning based on residuals can extract better image features. Based on VGGNet, Hong et al. [8] took the test and training sets as input and learned their correlations through training, achieving good results in finger-vein recognition. Das et al. [31] presented a network with high accuracy and stable performance that maintains reliable recognition of finger-vein images under varying quality, rotation, scaling, and translation on public databases. In summary, VGG19 with its nonbranching structure and ResNet50 with its residual modules are the most suitable choices for training the unidirectional finger-vein databases in our task; we discuss further details in the experiment section.

3.2. Unidirectional Finger-Vein Recognition Model Using Transfer Learning

The next step is to apply the pretrained VGG19 and ResNet50 to the four databases of A FV-USM, B FV-USM, A FV-SIPL, and B FV-SIPL and tune the parameters for finger-vein recognition. Our work adopts transfer learning to train the unidirectional finger-vein model, as shown in Figure 3.

Firstly, the model weights obtained from ImageNet training are used to initialize the parameters. The training images of A FV-USM, B FV-USM, A FV-SIPL, and B FV-SIPL are then input into the model for parameter tuning, respectively. After multiple iterations and parameter adjustments, we obtain a model for each unidirectional finger-vein database, in preparation for the subsequent bidirectional feature concatenation experiments. During classification and recognition, the similarity between test-sample features and training-set features is converted into class probabilities: the Softmax function returns probability values for all finger-vein categories, and the category with the maximum value is the recognized identity.
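The Softmax step can be made concrete; a minimal NumPy version applied to hypothetical class logits (in the real model it operates on the CNN's output layer):

```python
import numpy as np

def softmax(logits):
    """Map finger-vein class logits to probabilities; the predicted
    identity is the class with the maximum probability."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax([2.0, 1.0, 0.1])   # toy logits for three classes
predicted = int(np.argmax(probs))  # class 0 is the recognized identity
```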

3.3. Proposed Finger-Vein Recognition Model Based on Bidirectional Feature

As shown in Figure 2, the experiment first extracts features from the original finger-vein image and from the position-transformed one. This step simplifies the original complex information into meaningful features with large between-class and small within-class differences. We then form a bidirectional feature by concatenation for matching, decision-making, and final classification. For example, one bidirectional representation consists of the feature vectors extracted by the parameter-tuned VGG19 under A FV-USM and B FV-USM (and likewise for ResNet50), yielding a 4096-dimensional finger-vein feature vector. Similarly, we capture the bidirectional feature of A FV-SIPL and B FV-SIPL. Finally, we complete training and testing with the SVM classifier; the results are presented in Section 5.
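The classifier stage can be sketched with scikit-learn's SVC, assuming scikit-learn as the SVM implementation (the paper does not name one); small synthetic clusters stand in for the 4096-dimensional concatenated features:

```python
import numpy as np
from sklearn.svm import SVC  # assumed SVM implementation choice

rng = np.random.default_rng(0)
n_classes, per_class, dim = 3, 6, 64  # the paper uses 4096-dim features

# Synthetic bidirectional features: one well-separated Gaussian cluster
# per finger class, centered at 0, 5, 10 along every dimension.
X = np.vstack([rng.normal(loc=c * 5.0, scale=1.0, size=(per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

clf = SVC(kernel="linear").fit(X, y)            # train on concatenated features
probe = rng.normal(5.0, 1.0, (1, dim))          # a sample near class 1's center
pred = int(clf.predict(probe)[0])               # -> 1
```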

4. Experiments

The experiments are conducted on FV-USM [23] and FV-SIPL to verify our proposed finger-vein recognition framework using bidirectional feature and transfer learning. We discuss the experimental details and training iterations in this section and the results in Section 5.

4.1. Data Preprocessing

The CNN-based finger-vein recognition process generally comprises four parts: data loading, image preprocessing, feature extraction, and classification and recognition. The acquisition module collects biometric images. Image preprocessing aims to eliminate noise in the acquired image, extract the region of interest (RoI) [32], and improve image quality. Our experiments include two databases: the published finger-vein database of Universiti Sains Malaysia (FV-USM) and that of our group in the Signal and Information Processing Laboratory (FV-SIPL). The databases used in our experiments are summarized in Table 1.


                          FV-USM                    FV-SIPL
Number of people          123                       36
Number of hands           2                         1
Fingers of each hand      2 (index, middle)         3 (index, middle, and ring)
Pictures of each finger   6                         12
Total number              2952 (123 × 2 × 2 × 6)    1296 (36 × 1 × 3 × 12)

The finger-vein collection device built by our group in the Signal and Information Processing Laboratory adopts the direct-light collection method; its closed design yields high image quality and is not easily disturbed by external light. Moreover, the collected images contain no instrument region, so RoI extraction is unnecessary. The FV-SIPL collector principle, the collection device, and some of the collected finger veins are shown in Figure 4.

The training database in our work includes 150 kinds of finger veins in FV-USM and all pictures in FV-SIPL. The specific division of the required databases is shown in Table 2.

Database    Train set                Test set
A FV-USM    4 images × 150 classes   2 images × 150 classes
B FV-USM    4 images × 150 classes   2 images × 150 classes
A FV-SIPL   8 images × 108 classes   4 images × 108 classes
B FV-SIPL   8 images × 108 classes   4 images × 108 classes

Within the datasets, FV-USM images contain large instrument regions, and finger thickness varies between subjects, both of which affect subsequent recognition [33]. Therefore, we apply a series of operations to the collected images, such as RoI extraction, size normalization, and image enhancement, as illustrated in Figure 5.

As shown in Section 2, image enhancement is a critical step for data preprocessing. This work applies the contrast-limited adaptive histogram equalization (CLAHE) [34] algorithm to improve the finger-vein images. Taking the FV-SIPL database as an example, the image enhancement effect through CLAHE is shown in Figure 6.
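For illustration, the global (non-adaptive) form of histogram equalization can be sketched in NumPy; CLAHE extends it by equalizing local tiles with a clipped histogram, for which OpenCV's cv2.createCLAHE is a common implementation (an assumption, as the paper does not name a library):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image.
    CLAHE refines this per tile with a clipped histogram, e.g. via
    cv2.createCLAHE(clipLimit=..., tileGridSize=...)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map intensities so the cumulative distribution spans 0..255
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Low-contrast strip using only gray levels 100..155
img = np.tile(np.arange(100, 156, dtype=np.uint8), (8, 1))
out = hist_equalize(img)  # stretched to the full 0..255 range
```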

4.2. Environment Settings

The training environment is a 64-bit Ubuntu operating system with 64 GB of memory, an Intel Core i5-5200U CPU, and a GeForce GTX Titan-X GPU. Our experiments are based on Python 3.6 with the Keras and TensorFlow libraries. To avoid overfitting during training, we expand the database: the finger-vein images are stretched, randomly cropped, and subjected to other operations. The training parameter settings are shown in Table 3, where SGD denotes Stochastic Gradient Descent [35].


Batch size     64
Optimizer      SGD (learning rate = 0.001, momentum = 0.9)
Train number   32000
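The optimizer update behind Table 3 can be written out explicitly; a minimal NumPy sketch of SGD with momentum (in Keras this corresponds to SGD(learning_rate=0.001, momentum=0.9)):

```python
import numpy as np

LR, MOMENTUM = 0.001, 0.9  # values from Table 3

def sgd_momentum_step(w, grad, velocity):
    """One SGD-with-momentum update: the velocity accumulates a decaying
    sum of past gradients, smoothing the descent direction."""
    velocity = MOMENTUM * velocity - LR * grad
    return w + velocity, velocity

# Minimize the toy loss w**2 (gradient 2*w) for a few steps.
w, v = np.array([1.0]), np.array([0.0])
for _ in range(3):
    w, v = sgd_momentum_step(w, grad=2.0 * w, velocity=v)
```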

4.3. Experiment Iterations

After training the VGG19 and ResNet50 models on the above four unidirectional finger-vein datasets, the loss curves and recognition accuracy on the FV-USM and FV-SIPL verification sets are shown in Figures 7(a)–7(d) and 8(a)–8(d), where all horizontal axes of Figures 7 and 8 denote iteration steps.

The vertical coordinates refer to the loss of verification set of A FV-USM and B FV-USM in Figures 7(a) and 7(c), respectively, and the recognition accuracy in Figures 7(b) and 7(d). Similarly, the vertical axes represent the verification loss of A FV-SIPL and B FV-SIPL in Figures 8(a) and 8(c) and accuracy rate in Figures 8(b) and 8(d).

According to these results, the loss of ResNet50 converges to a stable state faster than that of VGG19. The ResNet50-based model also achieves higher accuracy and lower converged loss. Compared with ResNet50, the accuracy and loss of VGG19 fluctuate over a broader range during training because of its simpler, shallower structure.

5. Experimental Results

First, Table 4 shows the test accuracy of the unidirectional finger-vein recognition models based on VGG19 and ResNet50. Overall, the highest accuracy comes from our self-built FV-SIPL dataset, which requires no RoI extraction and contains less marginal information. As expected, the deeper ResNet50 achieves higher recognition accuracy than the simple nonbranching stacked network VGG19.

Database   Network   Accuracy (%)
A FV-USM   VGG19     96.33
B FV-USM   VGG19     96.00


We then evaluate our proposed model based on bidirectional feature extraction and transfer learning. As Tables 4 and 5 show, the finger-vein feature concatenation algorithm improves on both VGG19 and ResNet50 to various degrees compared with the unidirectional finger-vein databases under their respective models.

Network          Database          Accuracy (%)   Improvement (%)
CNN (VGG19)      A and B FV-USM    98.00          1.73
CNN (ResNet50)   A and B FV-USM    99.67          1.36
CNN (VGG19)      A and B FV-SIPL   99.07          0.07
CNN (ResNet50)   A and B FV-SIPL   99.31          0.24

In the FV-USM experiment, the most noticeable improvement comes from the feature concatenation based on VGG19, which reaches 98.00%, a 1.73% increase over A FV-USM. The concatenation based on the residual network ResNet50 [30] reaches 99.67%, 1.36% higher than A FV-USM. Similarly, in the FV-SIPL experiment, the most obvious enhancement is again the VGG19-based concatenation, which reaches 99.07%, 1.05% better than the B FV-SIPL experiment. The ResNet50-based concatenation reaches 99.31%, 0.24% higher than A FV-SIPL.

Furthermore, we extend the experiments to the score-fusion method and select the results with the best accuracy or minimal time consumption for comparison with our proposed concatenated feature. At the cost of slightly more time, score fusion achieves improved accuracy with a 5 : 5 score division, as shown in Table 6. However, it is difficult to ascertain proper score weights, and performance varies widely across score distributions; an inappropriate division may cause huge time consumption. Overall, the accuracy and time consumption of the score-fusion version do not outperform our proposed feature concatenation method.

Database          Based network    Feature concatenation       Score fusion
                                   Accuracy (%)   Time (s)     Score   Accuracy (%)   Time (s)

A and B FV-USM    CNN (VGG19)      98.00          16.88        8 : 2   96.67          13.24
                                                               7 : 3   97.73          13.95
                  CNN (ResNet50)   99.67          38.42        9 : 1   99.33          41.48
                                                               6 : 4   99.67          81.71

A and B FV-SIPL   CNN (VGG19)      99.07          21.84        8 : 2   96.06          17.28
                                                               5 : 5   99.54          19.50
                  CNN (ResNet50)   99.31          43.70        9 : 1   98.38          46.36
                                                               1 : 9   99.07          229.77

Additionally, Table 7 compares the proposed method with state-of-the-art finger-vein recognition algorithms. The CNN method in [39] denotes an improved LeNet5 structure. The references in the table adopt different image feature extraction algorithms or different CNN models.

Paper              Year   Feature extraction method                                    Database          Accuracy (%)   Time (s)

Qui et al. [36]    2016   Dual-sliding window localization + pseudoelliptical          FV-USM            97.02          -
                          transformer + 2D-PCA
He and Chen [37]   2018   Gabor + uniform + LBP + DBN                                  FV-USM            97.93          229.23
                          Improved Gabor + uniform + LBP + DBN                         FV-USM            98.21          235.23
                          CNN (max pooling + improved activation function)             FV-USM            96.43          -
Ding [38]          2019   Double linear Weber local descriptor                         FV-USM            99.25          157.74
Das et al. [39]    2019   CNN (proposed CNN)                                           FV-USM            97.48          -
Yuan [40]          2020   Multiscale LBP + DBN                                         FV-USM            99.59          339.40
This paper         2021   CNN (VGG19) + feature concatenation                          A and B FV-USM    98.00          16.88
                          CNN (ResNet50) + feature concatenation                       A and B FV-USM    99.67          38.42
                          CNN (VGG19) + feature concatenation                          A and B FV-SIPL   99.07          21.84
                          CNN (ResNet50) + feature concatenation                       A and B FV-SIPL   99.31          43.70

According to the comparative results in Table 7, the model based on bidirectional feature extraction presented in this paper has clear recognition advantages over the other methods. Compared with traditional image feature extraction algorithms, it also shows evident advantages in both recognition effect and time consumption.

The experimental results indicate that the algorithm achieves high-accuracy finger-vein recognition, which is inseparable from its feature extraction approach that acquires more abundant information. Our method compensates for the lack of in-depth information in previous methods and benefits the subsequent recognition steps. In conclusion, the algorithm in this paper reaches a state-of-the-art recognition level with little time consumption and has practical value.

6. Conclusions

We propose a novel approach based on bidirectional feature extraction to enhance the accuracy and efficiency of current finger-vein recognition algorithms. The model comprises finger-vein preprocessing, forward and reverse pretrained recognition modules, feature extractors, and an SVM classifier. The two-direction extraction we propose detects more detailed information and meaningful relations, improving the recognition effect. Experimental results show that the framework achieves high accuracy, reaching 99.67% on FV-USM and 99.31% on FV-SIPL, and outperforms most state-of-the-art and classical finger-vein frameworks, as shown in Table 7. Future research will combine this theoretical work with practical applications to develop a high-accuracy finger-vein recognition system based on bidirectional feature extraction. Enhancing performance and robustness without increasing complexity remains a long-term research topic; building on our current work, we can pay more attention to meaningful information acquisition and representation in learning methods. Such a solution can help address the leakage of biometric information and increase identification reliability, availability, and security.

Data Availability

The FV-USM database used to support the findings of this study may be released upon application to Dr. Bakhtiar Affendi Rosdi. The FV-SIPL database is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


Acknowledgments

This work was supported by the National Key R&D Program of China funded by the Ministry of Science and Technology of China (no. 2018YFB1403303).


References

1. D. Wang, J. Li, and G. Memik, "User identification based on finger-vein patterns for consumer electronics devices," IEEE Transactions on Consumer Electronics, vol. 56, no. 2, pp. 799–804, 2010.
2. J. F. Yang and C. Y. Jia, "Development of embedded finger vein image acquisition system," Journal of Civil Aviation University of China, vol. 33, no. 1, pp. 50–54, 2015.
3. J. L. Wayman, A. K. Jain, D. Maltoni, and D. Maio, Biometric Systems: Technology, Design and Performance Evaluation, Springer, Berlin, Germany, 2005.
4. E. C. Lee, H. C. Lee, and K. R. Park, "Finger vein recognition using minutia-based alignment and local binary pattern-based feature extraction," International Journal of Imaging Systems and Technology, vol. 19, no. 3, pp. 179–186, 2009.
5. K. Su, G. Yang, B. Wu et al., "Human identification using finger vein and ECG signals," Neurocomputing, vol. 332, no. 7, pp. 111–118, 2019.
6. G. Wang and J. Wang, "SIFT based vein recognition models: analysis and improvement," Computational and Mathematical Methods in Medicine, vol. 2017, Article ID 2373818, 14 pages, 2017.
7. M. Vatsa, R. Singh, and A. Majumdar, Deep Learning in Biometrics, CRC Press, Boca Raton, FL, USA, 2018.
8. H. G. Hong, M. B. Lee, and K. R. Park, "Convolutional neural network-based finger-vein recognition using NIR image sensors," Sensors, vol. 17, no. 6, p. 1297, 2017.
9. C. He, Z. Li, L. Chen, and J. Peng, "Identification of finger vein using neural network recognition research based on PCA," in Proceedings of the IEEE International Conference on Cognitive Informatics & Cognitive Computing, pp. 456–460, IEEE, Beijing, China, 2017.
10. S. Khellat-Kihel, R. Abrishambaf, N. Cardoso, J. Monteiro, and M. Benyettou, "Finger vein recognition using Gabor filter and support vector machine," in Proceedings of the International Image Processing, Applications and Systems Conference, pp. 1–6, IEEE, Genevo, Italy, November 2015.
11. P.-H. Chen, C.-J. Lin, and B. Schölkopf, "A tutorial on ν-support vector machines," Applied Stochastic Models in Business and Industry, vol. 21, no. 2, pp. 111–136, 2005.
12. J. D. Wu and C. T. Liu, "Finger-vein pattern identification using SVM and neural network technique," Expert Systems with Applications, vol. 38, no. 11, pp. 14284–14289, 2011.
13. K. E. Wold and P. Geladi, "Principal component analysis," Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1–3, pp. 37–52, 1987.
14. S. Balakrishnama and A. Ganapathiraju, "Linear discriminant analysis - a brief tutorial," Institute for Signal and Information Processing, vol. 18, pp. 1–8, 1998.
15. S. Ahmad Radzi, M. Khalil-Hani, and R. Bakhteri, "Finger-vein biometric identification using convolutional neural network," Turkish Journal of Electrical Engineering & Computer Sciences, vol. 24, pp. 1863–1878, 2016.
16. G. Meng, P. Fang, and B. Zhang, "Finger vein recognition based on convolutional neural network," Computer Programming, vol. 128, no. 5, p. 04015, 2017.
17. H. Huang, S. Liu, H. Zheng et al., "Novel finger vein verification methods based on deep convolutional neural networks," in Proceedings of the 2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA), pp. 1–8, New Delhi, India, November 2017.
18. H. Hu, W. Kang, Y. Lu, Y. Fang, and F. Deng, "FV-Net: learning a finger-vein feature representation based on a CNN," in Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), pp. 3489–3494, Beijing, China, November 2018.
19. J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," in Proceedings of the IEEE Conference on Computer Vision & Pattern Recognition, pp. 1646–1654, Seattle, WA, USA, June 2016.
20. S. J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345–1359, 2010.
21. D. Miao, Feature and Score Level Fusion of Multibiometrics, University of Science and Technology of China, Hefei, China, 2017.
22. W. Yang, S. Wang, J. Hu, G. Zheng, and C. Valli, "A fingerprint and finger-vein based cancelable multi-biometric system," Pattern Recognition, vol. 78, pp. 242–251, 2018.
23. M. S. M. Asaari, S. A. Suandi, and B. A. Rosdi, "Fusion of band limited phase only correlation and width centroid contour distance for finger based biometrics," Expert Systems with Applications, vol. 41, no. 7, pp. 3367–3382, 2014.
24. L. Torrey and J. Shavlik, "Transfer learning," in Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, IGI Global, Harrisburg, PA, USA, 2010.
25. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
26. A. Krizhevsky, I. Sutskever, and G. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, no. 2, pp. 1097–1105, 2012.
27. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 4, pp. 770–778, Las Vegas, NV, USA, June 2016.
  28. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708, San Francisco, CA, USA, July 2017. View at: Google Scholar
  29. S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in in Proceedings of the 2017 IEEE Conference on Computer Vision And Pattern Recognition (CVPR), IEEE, Honolulu, HI, USA, July 2017. View at: Google Scholar
  30. Y. S. Lu, Y. X. Li, B. Liu et al., “High based on deep residual network Spectral remote sensing data monitoring,” Acta Optica Sinica, vol. 37, no. 11, Article ID 1128001, 2017. View at: Google Scholar
  31. R. Das, E. Piciucco, E. Maiorana, and P. Campisi, “Convolutional neural network for finger-vein-based biometric identification,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 2, pp. 360–373, 2018. View at: Google Scholar
  32. C. Wu and X. Shao, “Research on finger vein recognition based on deep learning,” Computer Technology and Development, vol. 28, no. 2, pp. 200–204, 2018. View at: Google Scholar
  33. J. Chaki and N. Dey, A Beginner’s Guide to Image Preprocessing Techniques, CRC Press, Boca Raton, FL, USA, 2018.
  34. A. M. Reza, “Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement,” The Journal of VLSI Signal Processing-Systems for Signal, Image, and Video Technology, vol. 38, no. 1, pp. 35–44, 2004. View at: Publisher Site | Google Scholar
  35. Y. B. Goodfellow and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2016,
  36. S. Qiu, Y. Liu, Y. Zhou, J. Huang, and Y. Nie, “Finger-vein recognition based on dual-sliding window localization and pseudo-elliptical transformer,” Expert Systems with Applications, vol. 64, pp. 618–632, 2016. View at: Publisher Site | Google Scholar
  37. X. He and X. Chen, Research on Key Technologies of Finger Vein Identification, Jiangsu University of Science and Technology, Zhenjiang, China, 2018.
  38. Y. J. Ding, Study on Ferture Extraction Method of Finger Vein Image, Anhui University, Hefei, China, 2019.
  39. R. Das, E. Piciucco, E. Maiorana, and P. Campisi, “Convolutional neural network for finger-vein-based biometric identification,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 2, pp. 360–373, 2019. View at: Publisher Site | Google Scholar
  40. Y. Yuan, Finger Vein Recognition Based on Multi-Scale LBP and MPROVED Deep Confidence Network, Yanshan University, Qinhuangdao, China, 2020.

Copyright © 2021 Zhiyong Tao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

