Abstract

This paper studies the impact of illumination direction and bundle width on finger vascular pattern imaging and recognition performance. A qualitative theoretical model is presented to explain the projection of finger blood vessels on the skin. A series of experiments were conducted using a scanner of our own design with illumination from the top, from a single side (left or right), and with narrow or wide beams. A new dataset was collected for the experiments, containing 4,428 NIR images of finger vein patterns captured under well-controlled conditions to minimize position and rotation angle differences between sessions. Top illumination performs well because it provides more homogeneous illumination, which makes a larger number of veins visible. Narrower bundles of light do not affect which veins are visible, but they reduce overexposure at the finger boundaries and increase the quality of vascular pattern images. The narrow beam achieves the best performance, with an FNMR@FMR0.01% of 0%, and the wide beam consistently results in a higher false nonmatch rate. The comparison of left- and right-side illumination has the highest error rates because only the veins in the middle of the finger are visible in both images. Images acquired with different illumination directions may be interoperable, since they show the same vascular pattern, which principally consists of shadows projected onto the finger surface. Score and image fusion of right- and left-side illumination result in recognition performance similar to that obtained with top illumination, indicating that the vein patterns are independent of illumination direction. All results of these experiments support the proposed model.

1. Introduction

Finger vein recognition, more accurately referred to as finger vascular pattern recognition, is a promising biometric recognition method that has received considerable scholarly attention in recent years. Academic and commercial research has resulted in finger vein imaging devices that make the finger vascular pattern visible. However, there is little published information that explains how each element of the imaging process, such as the illumination, the camera, and the finger itself, affects image quality or finger vein recognition performance.

Our previous research [1] has demonstrated that physical modeling using a finger phantom provides better knowledge about the contribution of the parts inside the finger to the imaging process. A main finding was that the bone acts as a light diffuser in the image projection of finger blood vessels. Therefore, homogeneous illumination along the finger does not require a wide opening angle: it is sufficient to illuminate the finger with a narrow beam (e.g., a laser [2]) and let the bone diffuse the light. Nevertheless, near-infrared light-emitting diodes (NIR-LEDs) with large opening angles are commonly used in finger vein scanners.

Finger-vein imaging devices use two types of illumination, namely transmission and reflection, and three directions of illumination, namely from the top, from the side, and from the bottom [3]. Transmission illumination is, in principle, a penetration approach in which the light emitted from the illumination source passes through the finger, which is positioned between the light source and the camera. In the reflection approach, by contrast, the camera and the light source are on the same side of the finger (below it). Several researchers have developed imaging devices with various positions of cameras and illumination. For example, Hitachi [4] presented the first commercial product using the transmission principle, Zhang et al. [5] presented an academic device using the reflection method, and Hou et al. [6] used side-light illumination in their finger vein scanner. To the best of our knowledge, only a few studies [7] have attempted to quantify the impact of illumination bundle width on the finger vascular pattern. In this paper, we further study the impact of bundle width and direction of illumination on image quality and recognition performance.

Most public academic datasets have been produced by finger vein scanners using the transmission principle [8–10] and only a few using the reflection method [5]. Although some experimental research has been carried out on the impact of light direction, there is very little scientific understanding of the role of illumination in the imaging process. We present a qualitative theory explaining illumination’s role in finger vascular imaging. Using a dataset specially recorded for this purpose, we conduct an experimental assessment of bundle width and direction, more specifically, to examine how they affect recognition performance and to obtain better knowledge of the finger vein imaging process.

In addition, we will make available the dataset that we used for our experiments. This is a clean, controlled dataset consisting of finger vascular pattern images captured with various illumination directions (i.e., top, right-side, and left-side illumination) and with wide and narrow bundle widths. The dataset was acquired under controlled conditions to minimize external factors, such as translation and longitudinal finger rotation. Figure 1, for example, shows images of vascular patterns obtained with different light directions using narrow illumination beams. These images have similar vascular patterns and can be interoperable, provided they are of sufficient quality. We will present a scientific explanation for the similarities and interoperability of finger vascular patterns obtained with different light directions and illumination bundle widths.

The main purpose of our research is to obtain a better understanding of how the interaction between illumination and the physiology of the finger influences image quality. The dataset was collected solely for that purpose. We sought to rule out any other factors that could influence performance in order to study this interaction in isolation. The main contributions of this paper are as follows:
(1) It provides a theory that explains the impact of illumination widths and directions on finger vascular pattern imaging and recognition. In particular:
(a) A narrow beam gives images with more uniform intensity than a wide beam; it has no impact on which veins are visible but reduces overexposure.
(b) The light direction has no impact on which veins are visible because they are projected on the surface of the finger.
(2) It presents the results of experiments that support this theory.
(3) It demonstrates the interoperability of acquisition using various light directions.
(4) It introduces a clean, controlled dataset that has a consistent position of the finger, minimizes longitudinal finger rotation, and comprises 246 different fingers recorded under various light conditions: illumination widths (wide/narrow beams) and light directions (top, right side, and left side). It is available on request via the link (https://www.utwente.nl/en/eemcs/dmb/downloads/utccfvp/).

The outline of this paper is as follows. A brief review of related work is given in Section 2. Section 3 briefly describes the vascular pattern imaging of the finger. This is followed by experiments in Section 4. Next, Section 5 offers a detailed discussion. Finally, Section 6 summarizes this paper.

2. Related Work

Previously [1], we examined the finger vascular pattern imaging process by constructing a physical model to obtain better knowledge of image formation in the near-infrared range. Part of the physical model was implemented using a phantom, which was built to validate the simulation of the NIR imaging process. The combination of a phantom bone, soft tissue, and replicas of blood vessels was used to mimic a real human finger. The study resulted in a better understanding of finger vascular pattern image formation by clarifying the contributions of finger elements, such as the bone, the soft tissues, and the joints. This knowledge may help to enhance the image quality and biometric recognition performance of finger vascular patterns. Here, we will exploit and extend it.

Until 2000, research on finger blood vessels tended to focus on biomedical processing rather than biometric recognition, so improving the quality of finger vein images was the main purpose of early finger vein detection researchers [11]. Since the finger vascular pattern was proposed as a biometric characteristic, significant results have been achieved in this area over the years. A good overview of the state of the art and ongoing research is given in Uhl et al.’s [4] study. Several results, such as those presented in Hou et al.’s [6] study, have been achieved in improving image quality and increasing the recognition performance of finger vascular patterns.

In medical imaging, blood vessels are visualized using four illumination methods: X-ray, ultrasound [12], laser [7, 13, 14], and infrared [8–10, 15]. Infrared is the most widely applied illumination source in biometric finger vein recognition. In particular, near-infrared (NIR) light with a wavelength of 700–1,000 nm is commonly used in imaging devices. It has been observed that the best recognition performance is achieved with an NIR wavelength in the range of 875–890 nm [16]. Most academic studies use scanners equipped with NIR-LEDs with various spread angles. However, this may degrade image contrast when the LED light spreads outside the finger, leading to light leakage along the finger and overexposure at the finger border [1].

In 2009, Kim et al. [2] were the first to use a laser as an illumination source in imaging finger veins. Their research has shown that an NIR laser can generate finger vascular pattern images that are more uniform than those obtained with NIR-LEDs. Lee et al. [13] reported that using a laser can improve recognition performance by 60% compared to infrared LEDs. Based on the finding from our previous research [1] that the finger bone scatters the light, we are able to mimic this NIR-laser property using narrow-beam NIR-LEDs.

Finger vascular pattern imaging devices using NIR-LEDs can generally be classified into two types, i.e., light transmission [10] and light reflection [5]. The light transmission approach is the most widely applied and may produce better images than the reflection approach [9]. NIR-based finger vein scanners can also be divided into four classes based on the illumination direction, i.e., top illumination [9], side illumination [15], bottom illumination [17], and combined top-side illumination [18]. Surveys such as that conducted by Hou et al. [6] have shown that top illumination (light source above and camera below the finger) is commonly used to produce academic public datasets. Bottom illumination is identical to the light reflection approach; it is rarely used in finger vein scanners because it produces lower quality images [5, 17].

Ramachandra et al. [18] developed a low-cost sensor to capture finger veins from the dorsal and ventral/palmar view using multiple light directions: top, 2-sided, and combined top and 2-sided illumination. In addition, they studied the performance of finger vein recognition using several state-of-the-art methods. Their results show that finger vein images captured with combined top and 2-sided illumination yield the highest verification accuracy. However, they provide no scientific explanation for why this combination results in the best finger vein recognition performance.

Researchers have proposed various feature extraction methods and comparison techniques in finger vein recognition. For instance, a systematic review [19] shows that feature extraction methods can be classified into local and global features. Local features are related to line and point patterns, which require only a simple comparison technique, such as binary correlation. Global features represent the entire image by a single feature vector and are frequently extracted by neural networks that require training. In this paper, we apply the maximum curvature method by Miura et al. [20] to assess the impact of bundle width and direction. This method extracts the points with high curvature in cross-sectional profiles (in each of four directions), because it gives good insight into the visibility and detectability of the veins in the different areas of the finger, allowing us to see where the identity information is located. A simplified sketch is given below.
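To make the idea concrete, the following is a minimal, single-direction sketch of curvature-based vein detection in Python/NumPy. It is not Miura et al.'s full implementation, which scans four directions, connects high-curvature sections across profiles, and filters the combined score maps; the function names and the restriction to horizontal profiles are ours.

```python
import numpy as np

def profile_curvature(profile):
    """Curvature kappa(z) of one cross-sectional intensity profile.

    Veins appear as dark valleys (dents) in the profile, so positions
    with positive curvature are candidate vein centres [20].
    """
    p = profile.astype(float)
    d1 = np.gradient(p)                      # first derivative
    d2 = np.gradient(d1)                     # second derivative
    return d2 / (1.0 + d1 ** 2) ** 1.5

def vein_scores_horizontal(img):
    """Score curvature maxima along horizontal cross-sections only."""
    scores = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        k = profile_curvature(img[y])
        pos = k > 0
        pos[0] = pos[-1] = False             # avoid wrap-around below
        starts = np.flatnonzero(pos & ~np.roll(pos, 1))
        ends = np.flatnonzero(pos & ~np.roll(pos, -1))
        for s, e in zip(starts, ends):       # runs of positive curvature
            centre = s + int(np.argmax(k[s:e + 1]))
            # score = curvature at the centre times the width of the dent
            scores[y, centre] += k[centre] * (e - s + 1)
    return scores
```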

van der Spek and Spreeuwers [21] proposed a mathematical formula based on optics to model the projection of blood vessels on the surface of the finger. This knowledge was used to generate fake finger vascular pattern images.

In conclusion, academic and commercial researchers have developed various imaging devices to capture finger vascular patterns. The illumination source plays an essential role in capturing high-quality images of finger vascular patterns, which can increase finger vein recognition performance. Researchers have experimented with various illumination widths and orientations in their scanners to improve the imaging process, but without a scientific explanation. In this paper, we provide explanations for the phenomena observed by others.

3. Vascular Pattern Imaging of the Finger

The imaging process plays a pivotal role in the quality of finger vein images, and good-quality images help to achieve high finger vein recognition performance. The following subsections explain the imaging process in more detail: the image formation model, the similarity score, and the interoperability of finger vein images.

3.1. Finger Vascular Pattern Image Formation Model

In Normakristagaluh et al.’s [1] study, we presented a physical model to simulate the imaging process of finger vein acquisition, which showed that the finger bone acts as an illumination diffuser. Building on those findings, we created narrow-beam NIR-LEDs by placing pipe covers over the LEDs (Figure 2(a)). This setup can enhance the imaging process by avoiding light leakage along the finger and overexposure at the finger boundaries.

In this study, we build on this by developing a qualitative theoretical model (illustrated in Figure 3) in order to study the impact of illumination bundle width and direction. This research looks more deeply at the effects of wide or narrow bundles under varied lighting directions on finger vascular pattern imaging and recognition performance. We will support our model with experiments in the next section.

NIR-LEDs typically have a large opening angle of illumination, which causes light leakage or overexposure at the finger borders in the images. A possible reason for this overexposure is that the light passes through the soft tissue surrounding the finger bone [1]. To address this issue, we constructed pipes (produced with a 3D printer) that cover the NIR-LEDs (red arrow in Figure 4(a)) to produce a narrow beam (Figure 2(a)).

Figure 5 shows NIR images of finger vascular patterns captured using the top illumination source with and without pipe covers (narrow and wide beams). Narrow beams create a uniform intensity in the NIR images, allowing the vascular patterns to be extracted clearly (Figures 5(a) and 5(c)). With wide beams, however, parts of the vascular pattern are not visible due to overexposure (red ellipsoids in Figures 5(b) and 5(d)).

Due to the finger bone’s role as a diffuser, a narrow beam that strikes the bone is spread out evenly throughout the finger, producing uniform illumination of the finger vascular pattern and minimizing overexposure at the finger boundaries. Furthermore, the diffuser effect of the bone makes the blood vessels close to the skin appear as projected shadows, while vessels farther from the skin disappear or produce only a semishadow (Figure 3). In other words, the depth of the blood vessels determines the finger’s vascular appearance in the projection: deeper vessels are not seen, while blood vessels closest to the skin are shown most clearly. This qualitative model of image formation explains why finger vascular patterns are similar when using wide or narrow beams with different directions of illumination: in all cases, they result from indirect illumination, diffused by the bone, casting shadows on the finger surface.

In our research, the finger is illuminated by narrow-beam NIR-LEDs from the top and from one side (right or left), producing homogeneous NIR finger vascular pattern images. For example, Figures 6(a)–6(c) show NIR finger vascular patterns (in RGB coloring) extracted from the same finger using narrow beams with various light directions, i.e., top, right-side, and left-side illumination. The overlaid patterns (Figure 6(d)), colored red, green, and blue for the top, right, and left illumination sources, respectively, show that the extracted veins have similar patterns despite the different directions of illumination. Furthermore, there is enough overlap between the extracted patterns to allow for biometric comparison across different directions of illumination.

In Figures 7(a)–7(d), we also present finger vascular patterns resulting from illumination with wide beams (LEDs without pipe covers). The images show the same vascular patterns as obtained with narrow bundles, with the same overlap (Figure 7(d)). When using illumination from one side (right or left), the images have darker parts (Figures 7(b) and 7(c)). The brighter, overexposed areas could result from light reflected by the bone, but more research is needed to find the cause. These parts may affect the quality of the NIR images and finger vein recognition performance.

According to our model, the vascular pattern is a projection of the blood vessels on the surface of the finger, independent of the angle of the light source or the camera. This means that 3D reconstruction by means of stereovision [22, 23] will only lead to a reconstruction of the projection of the vessels on the 3D surface of the finger and will provide no information on the depth of the vessels.

3.2. Similarity Score

The similarity score results from a correlation between two binarised feature images of finger vein patterns, a registered image $R$ and an input image $I$, evaluated over a range of displacements. The template is defined as the rectangular subregion of $R$ with upper-left position $(c_w, c_h)$ and lower-right position $(w - c_w, h - c_h)$, where $w$ and $h$ are the image width and height. The word displacement refers to the optimum offset of this sliding window, found by correlating the binary images as in Formula (1), in order to determine where the template fits best in the input image without padding zeros around it. The optimal offsets are denoted $s_0$ and $t_0$. The correlation $N_m(s, t)$, which indicates the agreement between the registered and input data at the positions where the template intersects $I$, is determined as follows:

$$N_m(s, t) = \sum_{y=0}^{h-2c_h-1} \sum_{x=0}^{w-2c_w-1} I(s+x,\, t+y)\, R(c_w+x,\, c_h+y), \quad 0 \le s \le 2c_w,\ 0 \le t \le 2c_h. \tag{1}$$

The maximum value of the correlation matrix, $N_m(s_0, t_0)$, is normalized and utilized as a similarity score [10, 20]. The normalization is done as follows:

$$S = \frac{N_m(s_0, t_0)}{\sum_{y=t_0}^{t_0+h-2c_h-1} \sum_{x=s_0}^{s_0+w-2c_w-1} I(x, y) + \sum_{y=c_h}^{h-c_h-1} \sum_{x=c_w}^{w-c_w-1} R(x, y)}. \tag{2}$$

In this equation, $s_0$ and $t_0$ are the indices of the highest value in the correlation matrix. The score value $S$ falls between 0 and 0.5.
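As an illustration, Formulas (1) and (2) can be transcribed directly into Python/NumPy as follows. The function name and the explicit loop over displacements (chosen for clarity rather than speed) are ours, and the inputs are assumed to be binarised 0/1 arrays.

```python
import numpy as np

def miura_match(R, I, cw, ch):
    """Similarity score between two binarised vein images, Formulas (1)-(2).

    R : registered image, I : input image, both 0/1 arrays of shape (h, w).
    cw, ch : horizontal and vertical margins defining the template,
             i.e., the template is R[ch:h-ch, cw:w-cw].
    """
    h, w = R.shape
    template = R[ch:h - ch, cw:w - cw]
    th, tw = template.shape

    # Formula (1): correlation N_m(s, t) for all displacements, no padding.
    Nm = np.zeros((2 * ch + 1, 2 * cw + 1))
    for t in range(2 * ch + 1):
        for s in range(2 * cw + 1):
            Nm[t, s] = np.sum(I[t:t + th, s:s + tw] * template)

    # Formula (2): normalise the peak by the vein pixels of both overlaps.
    t0, s0 = np.unravel_index(np.argmax(Nm), Nm.shape)
    denom = I[t0:t0 + th, s0:s0 + tw].sum() + template.sum()
    return Nm[t0, s0] / denom          # score in [0, 0.5]
```

The margins $c_w$ and $c_h$ bound the maximum displacement that can be compensated: larger margins tolerate more finger translation but shrink the template.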

3.3. Interoperability of Finger Vascular Pattern Imaging

In principle, images collected under different illumination angles can be used interoperably in vascular pattern recognition, since the same vessels become visible as projections on the surface. The fact that this does not always work well [24] is due to image quality issues. Arican et al. [24] applied the maximum curvature algorithm to compare finger NIR images captured by various finger vascular pattern imaging devices, i.e., four academic devices and one commercial product. Their experiment showed that similar finger vein patterns were captured by the different finger vein scanners (Figure 8). Furthermore, their findings showed that cross-sensor error rates were high due to external factors in the generated dataset, such as lens distortion, longitudinal rotation, and translational shift (finger misplacement) between devices.

Since the vascular pattern is a projection of veins near the skin, variation in light orientation does not affect it. For example, Figures 9(a)–9(c) illustrate extracted vein patterns of the same finger with a narrow beam illuminated from various light directions, i.e., top (red), right-side (green), and left-side illumination (blue), respectively. These images show similar finger vascular patterns, which means that, in principle, interoperability across different directions of illumination is possible.

The current acquisition system does not support simultaneous left–right illumination: the LED light source is positioned on top of or to one side of the finger (more details are given in Subsection 4.1). Therefore, we adopted a fusion approach to simulate simultaneous illumination. We used three approaches to evaluate the interoperability of right- and left-side illumination: score fusion, nonaligned image fusion, and aligned image fusion. First, we use max-score fusion because its risk of increasing false matches is low: taking the highest of the two scores mainly raises the mated (genuine) scores. Suppose the left-illuminated image shows very few veins while the right-illuminated image shows many; an average of the two scores would be pulled below the better score, whereas the maximum preserves it (see the sketch below). Second, we fuse the binarised vein images obtained with right- and left-side illumination using the “OR” operation to assess the interoperability of side illumination (Figure 9(e) with and Figure 9(d) without alignment).
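A minimal sketch of the max rule, with illustrative score values of our own:

```python
import numpy as np

def fuse_scores_max(score_right, score_left):
    """Max-rule score fusion of the right- and left-side comparisons."""
    return np.maximum(score_right, score_left)

# Illustration: the left image shows few veins, the right many.
# mean(0.08, 0.21) = 0.145 drags the mated score down, whereas
# max(0.08, 0.21) = 0.21 keeps the better side's evidence.
```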

We take two sets of binarised feature images of finger vein patterns captured using different illumination directions, i.e., the image $F_R$ captured with right-side illumination and $F_L$ captured with left-side illumination. The displacement between both features is calculated with the correlation in Formula (1), where $F_R$ is the input data $I$ and $F_L$ is the registered data $R$. Translating the left image over the resulting displacement $(s_0, t_0)$ yields $F_L'$. We then add the veins together so that we have more veins, as in the following:

$$F_{R+L}(x, y) = F_R(x, y) \lor F_L'(x, y), \tag{3}$$

where $F_{R+L}$ is the right + left image fusion.
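A sketch of the aligned OR fusion, reusing the displacement search of Formula (1); the shifting logic and the names are ours, and binarised 0/1 arrays are assumed:

```python
import numpy as np

def fuse_images_or(FR, FL, cw, ch):
    """Aligned right + left image fusion, Formula (3).

    FR, FL : binarised (0/1) vein images from right- and left-side
    illumination. FL is translated over the optimal displacement found
    with the correlation of Formula (1) (FR as input data I, FL as
    registered data R), then combined with FR by a pixel-wise OR.
    """
    h, w = FL.shape
    template = FL[ch:h - ch, cw:w - cw]
    th, tw = template.shape

    # displacement search, as in Formula (1)
    Nm = np.zeros((2 * ch + 1, 2 * cw + 1))
    for t in range(2 * ch + 1):
        for s in range(2 * cw + 1):
            Nm[t, s] = np.sum(FR[t:t + th, s:s + tw] * template)
    t0, s0 = np.unravel_index(np.argmax(Nm), Nm.shape)
    dy, dx = t0 - ch, s0 - cw                  # shift of FL w.r.t. FR

    # translate FL by (dx, dy), padding the uncovered border with zeros
    FL_shifted = np.zeros_like(FL)
    src = FL[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    FL_shifted[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)] = src
    return np.logical_or(FR, FL_shifted).astype(FR.dtype)
```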

Proper alignment is needed to improve the image fusion. This is because the left and right images were recorded at different moments, so there may be small shifts that can affect the performance of finger vein recognition. The fused right–left image with alignment has a pattern similar to that resulting from top illumination. This is shown in Figure 9(f) by an overlay of the aligned fused right–left image (Figure 9(e)) and the top-illumination image (Figure 9(a)).

4. Experiments

We performed several experiments to illustrate the impact of illumination direction and bundle width on the finger vascular pattern image quality and recognition performance under otherwise clean and controlled circumstances. The subsequent subsections will provide further details of the imaging device, the dataset, and the experimental results.

4.1. Imaging Device

Finger vascular patterns are hidden beneath the skin, so specific acquisition devices are required to make them visible. In this work, we utilized and customized the acquisition device (Figure 4) described in Rozendal’s [15] and Veldhuis et al.’s [26] studies. This device is called UTFVPv3 (University of Twente Finger Vein Patterns version 3) and consists mainly of a light source and an NIR camera.

The light source uses eight NIR-LEDs in each strip for top and side illumination, either with pipe covers (red and green arrows in Figure 4(a)) or without them (Figure 4(b)). Light with pipe covers refers to illumination sources covered by 3D-printed pipes to obtain narrow-beam NIR-LEDs. The box has dimensions of 7 × 8 × 10 cm and contains a camera with an NIR filter. Two Raspberry Pis (yellow arrows in Figure 4) on the back of the box are used for image acquisition. For more information about this scanner, please contact the Database Management and Biometrics (DMB) Group, University of Twente.

4.2. Dataset

We collected the dataset using the finger vein sensor described in Subsection 4.1 (UTFVPv3), allowing us to examine the effect of illumination direction and bundle width on image quality and recognition performance under otherwise clean and controlled conditions. The dataset covers 41 individuals, with age and gender recorded as metadata, and each participant was assigned a unique identifying number to ensure anonymity. In the acquisition process, six fingers of each participant were captured under controlled conditions to minimize differences in position and rotation angle between the three sessions. The time between sessions was roughly half an hour. The captured fingers are the index, middle, and ring fingers of both hands.

The finger acquisition protocol was as follows. First, we explained the research on finger vascular pattern recognition to all participants and asked them to fill out and sign the consent form. Next, six fingers of each participant were captured by the UTFVPv3 scanner, starting with the index, middle, and ring fingers of the left hand, followed by the index, middle, and ring fingers of the right hand. Each finger was captured in three sessions. Each time, the finger was positioned in a predefined way using markings on the device (blue arrows in Figure 4). The acquisition process using LEDs with pipes (narrow-beam NIR-LEDs) starts with top illumination, then right-side illumination, and then left-side illumination. The same procedure was then repeated for the illumination sources without pipes (wide-beam NIR-LEDs). Thus, 246 different fingers generate 41 (participants) × 6 (fingers) × 3 (light directions) × 2 (wide/narrow beam) × 3 (sessions) = 4,428 images.

Figure 10 shows an example of the dataset with various light directions (top, right, and left illumination source) and bundle widths (wide and narrow beams) for the same finger. The three imaging sessions result in slightly varied finger vascular patterns for both wide (Figure 11) and narrow beams (Figure 12). Furthermore, with wide-beam illumination some veins are not visible, as can be observed in the overexposed areas of the NIR images (Figures 10(d)–10(f)).

The experiments consist mainly of two parts, i.e., single-direction and cross-direction assessments. Single-direction comparisons are made between images using the top–top, right–right, or left–left direction of illumination. Cross-direction comparisons combine light source directions pairwise, for example, top versus right side, top versus left side, and right versus left side. In this paper, we will use the terms mated and nonmated [27] instead of genuine and impostor. The number of mated and nonmated comparisons for each light direction in the single-direction experiments is 738 and 271,215, respectively. In the cross-direction experiments, the numbers are 2,214 mated and 542,430 nonmated comparisons.
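These counts follow directly from the protocol (246 fingers, three sessions per direction, mated pairs only between images of the same finger). The following sanity check, with variable names of our own, reproduces them:

```python
# Sanity check of the comparison counts (246 fingers, 3 sessions).
fingers, sessions = 246, 3
images = fingers * sessions                       # 738 images per direction/beam

# Single-direction: mated pairs are the session pairs of the same finger.
mated_single = fingers * (sessions * (sessions - 1) // 2)      # 738
nonmated_single = images * (images - 1) // 2 - mated_single    # 271,215

# Cross-direction (per pair of directions): all session combinations.
mated_cross = fingers * sessions * sessions                    # 2,214
nonmated_cross = images * images - mated_cross                 # 542,430

print(mated_single, nonmated_single, mated_cross, nonmated_cross)
```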

4.3. Expected Results

In the previous sections, we presented a model that describes image formation for finger vein recognition based on NIR illumination. From this model, it follows that the shape of the vein pattern is not expected to change with varying illumination directions. The brightness and contrast of the veins and background may vary, though, because of the varying path length the light has to travel. In addition, the model predicts that a narrow beam of light suffices to illuminate the full finger because the light is scattered effectively by the bone. The advantage of a narrow beam over a wide beam is that the likelihood of overexposure near the boundaries of the finger is much lower. The fact that the vein patterns themselves do not change suggests that images obtained using different illumination directions can be compared to verify whether they show the same finger, enabling interoperability between finger vein scanners that use different illumination approaches.

In order to investigate the correctness of the model, experiments were conducted using illumination from the top, left, and right directions, with narrow and wide beams. In the first experiment, we investigate the recognition performance when two finger vein images acquired with the same illumination direction are compared (single-direction). The second experiment presents results for the case where the illumination directions of the two compared images differ (cross-direction). The third experiment investigates the fusion of left and right illumination, using three different fusion approaches: score fusion and image fusion without and with alignment. All experiments are carried out with wide and narrow beams, and the false nonmatch rate at a fixed false match rate (FNMR@FMR0.01%, 0.1%, and 1%) of the Miura finger vein comparison method [20], with confidence intervals, is used as the performance metric.
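For clarity, FNMR@FMR can be estimated from the two score sets by thresholding at an empirical quantile of the nonmated scores. This sketch uses one simple convention (ties and interpolation are ignored); the function name is ours:

```python
import numpy as np

def fnmr_at_fmr(mated, nonmated, fmr_target):
    """FNMR at a fixed FMR, e.g., fnmr_at_fmr(m, n, 1e-4) for FMR = 0.01%.

    The threshold is the (k+1)-th highest nonmated score, so that at most
    a fraction fmr_target of the nonmated scores lies strictly above it;
    mated scores at or below the threshold count as false nonmatches.
    """
    d = np.sort(np.asarray(nonmated))[::-1]        # descending
    k = int(fmr_target * len(d))                   # tolerated false matches
    threshold = d[k]
    return float(np.mean(np.asarray(mated) <= threshold))
```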

Let the standard normal density and distribution functions be denoted by $\phi$ and $\Phi$, respectively. In this paper, we denote $\kappa = \Phi^{-1}(1 - \alpha/2)$, the $100(1 - \alpha/2)$th standard normal percentile. The confidence interval based on the standard normal approximation is provided by Brown et al. [28]:

$$CI_s = \hat{p} \pm \kappa\, n^{-1/2} \left(\hat{p}(1 - \hat{p})\right)^{1/2}. \tag{4}$$

For a general problem, this interval is determined by inverting the acceptance region of the widely recognized Wald large-sample normal test:

$$\hat{\theta} \pm \kappa\, \widehat{se}(\hat{\theta}), \tag{5}$$

where $\theta$ is a generic parameter, $\hat{\theta}$ is the maximum likelihood estimate of $\theta$, and $\widehat{se}(\hat{\theta})$ is the estimated standard error of $\hat{\theta}$. In the binomial case, we have $\hat{p} = X/n$ (the proportion of “successes” $X$ in a sample of size $n$) and $\widehat{se} = (\hat{p}(1 - \hat{p})/n)^{1/2}$. This study used the Wilson interval to calculate a 95% confidence interval ($\kappa = 1.96$), given by the following equation:

$$CI_W = \frac{X + \kappa^2/2}{n + \kappa^2} \pm \frac{\kappa \sqrt{n}}{n + \kappa^2} \left(\hat{p}(1 - \hat{p}) + \frac{\kappa^2}{4n}\right)^{1/2}. \tag{6}$$
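A small helper implementing the Wilson interval of Formula (6), using only the Python standard library; the example call uses the 738 mated comparisons of our single-direction experiments:

```python
from statistics import NormalDist
import math

def wilson_interval(errors, n, alpha=0.05):
    """Wilson confidence interval for an error rate, Formula (6).

    errors : number of observed errors (e.g., false nonmatches)
    n      : number of comparisons
    alpha  : 0.05 gives a 95% interval (kappa = 1.96)
    """
    kappa = NormalDist().inv_cdf(1 - alpha / 2)    # Phi^{-1}(1 - alpha/2)
    p_hat = errors / n
    center = (errors + kappa ** 2 / 2) / (n + kappa ** 2)
    half = (kappa * math.sqrt(n) / (n + kappa ** 2)) * math.sqrt(
        p_hat * (1 - p_hat) + kappa ** 2 / (4 * n))
    return center - half, center + half

# e.g., zero false nonmatches in 738 mated comparisons:
# wilson_interval(0, 738)  ->  (0.0, 0.0052) approximately
```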

We expect that in all experiments narrow-beam illumination results in better recognition performance than the wide beam. Furthermore, single-direction top illumination is expected to give the best performance, because it results in more homogeneous illumination and thus a larger number of visible veins. We expect the cross-direction recognition performance to be worse than the single-direction performance because side illumination makes some veins less clearly visible. This is worst for the comparison of left- to right-side illumination, where only the veins in the middle of the finger are clearly visible in both images. Finally, a fusion of left- and right-illuminated images results in a complete vein image again, and its performance should be close to that of top illumination. The FNMR@FMR0.01%, 0.1%, and 1% with 95% confidence intervals for all experiments are given in Table 1.

4.4. Experiment 1: Single-Direction Evaluation

We investigate each illumination direction separately in the single-direction evaluation to see the impact of bundle width. According to Table 1, employing the wider bundle consistently results in a substantially higher FNMR@FMR0.01%, by a factor of 5 or more. Note that only for the single-direction left-side condition is there a small overlap of the confidence intervals at FMR = 1%. This means that for the single-direction case, the narrow bundle significantly improves the recognition performance.

Figures 13–15 show the histograms of mated and nonmated scores for the single-direction evaluations, i.e., top, right-side, and left-side illumination, respectively. Top illumination results in a large difference in FNMR@FMR0.01% between narrow and wide beams (Table 1), as evidenced by the different overlaps of the mated and nonmated similarity score histograms (Figure 13). The nonmated wide-beam scores have a broader range, skewed to the right, than the nonmated narrow-beam scores, and the mated wide-beam scores show a longer tail to the left (lower scores) than the mated narrow-beam scores. For illumination from the top, the FNMR@FMR0.01%, 0.1%, and 1% for the narrow beam are much smaller than for the wide beam.

The mated versus nonmated score histograms for right- and left-side illumination show similar behavior (Figures 14 and 15). Mated and nonmated wide-beam scores have a wider range than the narrow-beam scores for both sides of lighting, but the nonmated wide-beam scores with right-side illumination are more skewed to the right than with left-side illumination. The FNMR@FMR0.01%, 0.1%, and 1% for the narrow beam using one-side illumination are again much lower than for the wide beam (Table 1).

Generally, the histograms show that all illumination directions in the single-direction evaluations have roughly equal score ranges. However, the highest scores for one-side (right or left) illumination are still lower than for top illumination.

4.5. Experiment 2: Cross-Direction Evaluation

The performance of comparing top versus left-side, top versus right-side, and right-side versus left-side illumination is also displayed in Table 1. The table shows that cross-direction comparison degrades performance, with significantly higher false nonmatch rates than the single-direction evaluation for both narrow and wide beams. Note that there is no overlap between the FNMR confidence intervals at corresponding FMRs. This means that for the cross-direction case as well, the narrow bundle significantly improves the recognition performance.

A comparison between top and one-side illumination (either right or left) shows that the mated score histogram for wide-beam illumination is shifted significantly to the left, resulting in a large overlap of the mated and nonmated score histograms (Figures 16 and 17). The false nonmatch rate is roughly comparable when top illumination is compared to either right- or left-side illumination, even though the FNMR@FMR0.01% of top versus right side is three times lower than that of top versus left side (Table 1). In cross-direction illumination, the FNMR@FMR0.01%, 0.1%, and 1% for narrow beams are substantially lower than for wide beams.

The comparison of right- to left-side illumination using wide beams results in the highest FNMR@FMR0.01%, as also shown by the large overlap of the mated and nonmated score histograms in Figure 18. We can see that the mated wide-beam scores have moved even further to the left. Furthermore, the maximum comparison score of the mated narrow beam in this histogram is about 0.05 lower than for top versus one-side illumination (right or left). Also, all cross-direction evaluations using narrow beams show the same nonmated score ranges (between 0.05 and 0.12) and comparable mated score ranges.

In addition, the 95% confidence intervals of the false nonmatch rates for all cross-direction experiments, for both wide and narrow beams, are shown in Table 1. All of the cross-direction false nonmatch rates lie between the lower and upper endpoints of the confidence intervals. The lower endpoints using a wide beam are, for all cross-direction experiments, higher than the upper endpoints using a narrow beam at FNMR@FMR0.01%, 0.1%, and 1%.

4.6. Experiment 3: Fusion

As mentioned before, we investigated three approaches for the fusion of left and right illumination: score fusion, nonaligned image fusion, and aligned image fusion. Their performance is displayed in Table 1, which shows that in the fusion case the narrow bundle improves the recognition performance relative to wide beams. Note that in almost all fusion cases, there is no overlap between the false nonmatch rate confidence intervals at corresponding FMRs, which again means that the narrow bundle significantly improves the recognition performance. However, at a false match rate of 1%, this significance cannot be proven because the confidence intervals overlap due to the small dataset.

Figure 19 shows the mated and nonmated score histograms for the score fusion of right- and left-side illumination using wide and narrow beams. The similarity scores span roughly the same ranges as those of single-direction right- or left-side illumination. However, the FNMR@FMR0.01%, 0.1%, and 1% of score fusion are much lower than those of right- or left-side single-direction illumination. Using narrow beams consistently reduces the FNMR@FMR0.01% by more than a factor of 10 compared to wide beams. Furthermore, comparing top illumination to the score fusion of the right and left sides using narrow beams results in an FNMR@FMR0.01%, 0.1%, and 1% of 0%. This is also shown by the nonoverlapping mated and nonmated score histograms (Figure 20).

For image fusion, wider bundles consistently show a higher FNMR@FMR0.01% than narrower bundles, by a factor of 6 or more (Table 1). The score histograms of image fusion in Figure 21 (without alignment) and Figure 22 (with alignment) show that the overlap of mated and nonmated scores is larger for wide beams than for narrow beams. The mated narrow-beam score distribution for aligned image fusion is shifted to the right (Figure 22), producing better performance than nonaligned image fusion (Figure 21). Moreover, aligned image fusion lowers the false nonmatch rate compared to right- or left-side single-direction illumination.

Additionally, we investigated the performance of top illumination versus the fused right + left side images. The right and left images were recorded at different moments, causing a shift between the captured images. We can see an overlap of the mated and nonmated score histograms for top versus nonaligned right + left image fusion, where wide beams give a larger overlap than narrow beams (Figure 23). Therefore, proper alignment is needed to improve the fusion. This decreases the false nonmatch rates compared to the other cross-direction evaluations. It also shows that using wide beams results in a higher FNMR@FMR0.01% than narrow beams. Figure 24 shows that the mated scores for wide beams are much lower than for narrow beams. The score histograms of aligned image fusion also show that the mated narrow- and wide-beam scores shift to the right compared to nonaligned image fusion.

For fusion, the false nonmatch rates fall between the lower and upper endpoints of the 95% confidence intervals for both wide and narrow beams (Table 1). For nonaligned image fusion, the lower endpoints using a wide beam are higher than the upper endpoints using a narrow beam at FNMR@FMR0.01%, 0.1%, and 1%. However, for score fusion and aligned image fusion, the upper endpoint of FNMR@FMR1% using narrow beams is higher than the lower endpoint using wide beams. For top versus fused right + left, the lower FNMR endpoints using wide beams are on average higher than the upper endpoints using narrow beams.

5. Discussion

In Section 3, we presented a qualitative theoretical model of the finger vein imaging process. The model is based on the absorption and scattering properties of bone and softer tissues: the bone hardly absorbs NIR light and scatters it in all directions; it basically acts as a very diffuse lamp. The softer tissues absorb the NIR light, and the blood in the veins contains hemoglobin, which absorbs NIR even more strongly than the soft tissues. The result is that only veins close to the surface are visible in the vein images as dark lines, which are principally the projected shadows on the surface of the finger. Using this model, we predict, first, that the geometry of the vein patterns does not change with varying illumination direction and, second, that a narrow bundle of light suffices to illuminate the full finger because the light is effectively scattered in all directions by the bone. Based on the model, we also predict that using a narrow beam to illuminate the finger reduces the risk of overexposure and improves image quality and the number of visible veins.

Using a series of experiments, we investigated the recognition performance for different illumination directions and beam widths. All results of these experiments support the presented model. The risk of overexposure is largest for side illumination combined with a wide beam. In this case, the number of visible veins is reduced significantly, resulting in lower verification scores. We can also see (Table 1) that using the wider bundle consistently results in a substantially higher FNMR@FMR0.01%, 0.1%, and 1% than the narrow bundle.

In a final experiment, we compared images obtained with top illumination to combined left + right illumination and obtained an FNMR of almost 0% at FMR0.01% and 1%. This FNMR is much lower than that of one-side single-direction illumination (either right or left). This clearly shows that the vein patterns are constant across illumination directions.

The performance of score and image fusion is better than both single-direction evaluations of right- or left-side illumination, using either wide or narrow beams. Moreover, fusion, especially score fusion and aligned image fusion, significantly reduces the false nonmatch rates, resulting in performance equal or close to that of top illumination. The alignment is effective in image fusion because the left- and right-side illuminated images were not recorded at the same time, causing a shift between the captured images.

For the experiments, we used a newly acquired dataset. We took special care to minimize the rotation and translation of fingers between the different recording sessions using markers on the acquisition device. Rotations and translations are often the cause of false nonmatches. The high quality of the dataset is illustrated by the very low FNMR@FMR0.01%, 0.1%, and 1% (0%) and by the nonoverlapping mated and nonmated score histograms for illumination from the top.

Since the acquisition was performed under controlled conditions, the use of a narrow bundle improves the recognition results through better image quality. We have to remark, though, that under uncontrolled conditions other effects, e.g., rotations or nonuniform illumination, may have a stronger impact on the recognition results. The fact that finger vein images recorded with varying illumination directions and properties show the same vein patterns makes it likely that interoperability between different finger vein acquisition devices can be realized in the future.

6. Conclusion

The impact of illumination on the image quality of finger vein images and on finger vein recognition has not yet been investigated in great depth. We propose a qualitative theoretical model based on the observation that the bone scatters NIR light while the blood and soft tissues absorb it, with the hemoglobin in the blood absorbing most strongly. The result is that only blood vessels close to the skin are visible, through their projected shadows. The model allows us to predict the effect of illumination on finger vein images: the pattern is independent of the direction of the illumination, while illumination with a narrow beam improves image quality due to a reduced risk of overexposure. We present a series of experiments to validate these predictions using illumination from different directions and with different beam widths. For the experiments, we acquired a highly controlled dataset with minimal rotation and translation of the fingers between sessions. All experiments support the model, as ultimately shown by an experiment comparing images obtained with top illumination to images obtained with left + right illumination and narrow beams. This results in a false nonmatch rate similar to that of top illumination, clearly demonstrating that the vein patterns do not depend on the illumination direction and that illumination using narrow beams results in significantly better image quality and recognition performance. We recommend equipping sensors with narrow-bundle illumination and developing more open or compact sensors with side illumination to improve recognition performance.

Data Availability

The data used to support the findings of this study were supplied by the University of Twente (UT) under license and so cannot be made freely available. Requests for access to these data should be made via the link https://www.utwente.nl/en/eemcs/dmb/downloads/utccfvp/.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was funded by the Research and Innovation in Science and Technology Project (RISET-Pro) of the Ministry of Research, Technology, and Higher Education of the Republic of Indonesia (World Bank Loan No. 8245-ID) and supported by the National Research and Innovation Agency (BRIN).