International Journal of Distributed Sensor Networks
Volume 2013 (2013), Article ID 952568, 12 pages
http://dx.doi.org/10.1155/2013/952568
Research Article

Using Extension Theory to Design a Low-Cost and High-Accuracy Personal Recognition System

Meng-Hui Wang and Po-Yuan Chen

Department of Electrical Engineering, National Chin-Yi University of Technology, Taiping Dist., Taichung City 41101, Taiwan

Received 7 November 2012; Accepted 17 February 2013

Academic Editor: Chao Song

Copyright © 2013 Meng-Hui Wang and Po-Yuan Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

With the advancement of information technology, personal recognition systems have attracted wide attention. As more recognition systems become available, the recognition rate and the price become decisive factors. This paper applies the extension method to palmprint features to design a low-cost personal recognition system. First, a low-cost webcam captures the palmprint image, from which the lengths, slopes, and distances of the principal palmprint lines are extracted by image processing. Devices for capturing hand images generally require high resolution and are therefore expensive; this paper instead uses a low-resolution, low-cost webcam and still obtains a recognition rate comparable to that of high-resolution devices. The extension algorithm is used for hand recognition and is compared with traditional algorithms and other recognition systems. The experimental results show that the proposed method achieves a higher recognition rate than traditional algorithms and that a low-resolution, low-cost capture tool can also provide a high recognition rate.

1. Introduction

Biological recognition technology, which uses features of the human body, is gradually replacing traditional personal recognition technology. Traditional credentials may easily be lost, stolen, or forgotten, so biometric recognition based on the human body has become a new trend [1]. In the past few years, biometric recognition has made considerable progress; past studies have used the face [2], iris [3], fingerprint [4], palmprint [5], and palm geometry [6] for identification. Many types of biological features can be captured, but whether a biometric system is accepted by the public depends on its accuracy, forgery prevention, security, convenience, and speed. The most economical and widely acceptable features are hand features. Hands have unique and stable features, and their convenience and security in recognition are obvious. Therefore, most studies on biometric recognition focus on hand-based technology [7].

Hand features can be divided into three major parts: fingerprint, palmprint, and hand geometry. This study captured the major hand feature, the palmprint, for identification; the slopes, lengths, and position distances of the three principal palmprint lines were the main features. Fingerprints and fine palm creases are the most likely to become indistinct because of external factors and aging, while the uniqueness of hand geometry is lower than that of the former two features and therefore less stable. The principal palmprint lines are unlikely to become indistinct with age or external factors; thus, they have high stability and uniqueness. This study used a low-resolution webcam to capture hand images and transfer them to a computer. Different capture instruments and different recognition methods were compared and discussed.

2. The Design of the Proposed Recognition System

Figure 1 shows the complete hand image recognition flow. First, the webcam captures the hand image; the palmar contour is determined after binarization of the original image, and the palmprint range to be captured is located. The principal-line features of the palmprint are extracted after preprocessing of the palmprint range, and the feature values are then stored in the database. The extension algorithm is used for identification, and the results are displayed on the screen.

Figure 1: The structure of the proposed palmar recognition system.

For capturing the hand image, the subject's fingers should be spread naturally, with all five fingers resting flat on the bottom of the box. The fingers should not touch or be very close to one another; otherwise the captured length, width, and contour features of the fingers are affected and errors result, as shown in Figure 2. Such images cannot be used in the recognition library, as they would impair the integrity and consistency of the database.

Figure 2: Hand images with fingers touching or too close together.

The tool used in this study for capturing hand images is a charge-coupled device, which is very sensitive to light; even a slight change in light and shadow produces large differences in the captured feature values, as shown in Figure 3. When the light is too dark, the hand position and size can hardly be identified, as shown in Figure 3(a). When the light is too bright, the palmar image is distorted and the palmprint cannot be seen at all, as shown in Figure 3(c). After multiple tests the light was adjusted to an optimal level at which the hand contour is obvious and the palmprint is clear, as shown in Figure 3(b). A semienclosed box is therefore used to fix the lighting, with sufficient and stable light provided inside. The box provides proper light for capturing hand images in different places at any time, avoiding overly dark or bright light that would result in incorrect or unsharp hand capture.

Figure 3: Hand images under different lighting conditions.

3. The Acquisition Method for Palmar Images

Since background noise in the hand image is likely to cause misrecognition, the hand must first be separated from the background. The image is converted to gray scale, and the background is then set to gray-scale value 0 (black). Otsu's method is used in this paper [8].

Let the gray-scale values of the image range over $\{0, 1, \ldots, L-1\}$, let $n_i$ be the number of pixels with gray-scale value $i$, and let the total number of pixels be $N = n_0 + n_1 + \cdots + n_{L-1}$. The probability distribution of gray-scale value $i$ is

$$p_i = \frac{n_i}{N}, \qquad p_i \ge 0, \qquad \sum_{i=0}^{L-1} p_i = 1.$$

A threshold $t$ divides the pixels of the image into two clusters: $C_0$ denotes the pixel cluster with gray-scale values in $\{0, \ldots, t\}$, and $C_1$ denotes the pixel cluster with gray-scale values in $\{t+1, \ldots, L-1\}$. The occurrence probabilities of the two clusters are

$$\omega_0 = \sum_{i=0}^{t} p_i, \qquad \omega_1 = \sum_{i=t+1}^{L-1} p_i,$$

and their mean gray-scale values are

$$\mu_0 = \sum_{i=0}^{t} \frac{i\, p_i}{\omega_0}, \qquad \mu_1 = \sum_{i=t+1}^{L-1} \frac{i\, p_i}{\omega_1}.$$

The variances of the two clusters are $\sigma_0^2$ and $\sigma_1^2$, respectively, and the total (within-class) variance is

$$\sigma_w^2(t) = \omega_0 \sigma_0^2 + \omega_1 \sigma_1^2.$$

The value $t^*$ that minimizes the total variance $\sigma_w^2(t)$ is the optimal gray-scale threshold. If the gray-scale value of an image pixel is less than $t^*$, it is set to 0; if it is greater than or equal to $t^*$, it is left unchanged. Equation (4) redistributes the gray-scale values of the image to separate the hand from the background:

$$g(x, y) = \begin{cases} 0, & f(x, y) < t^*, \\ f(x, y), & f(x, y) \ge t^*, \end{cases} \tag{4}$$

where $f(x, y)$ is the original gray-scale image and $g(x, y)$ is the processed image. Figure 4 shows the difference between the original image and the image processed by Otsu's method.

Figure 4: Using Otsu’s method to separate palm from background.
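To make the procedure concrete, the following is a minimal NumPy sketch of Otsu's thresholding and the background removal of (4); the function names and the assumption of an 8-bit gray-scale image are ours, not from the paper.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level t* that minimizes the within-class variance
    of an 8-bit gray-scale image (Otsu's method [8])."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                       # probability of each gray level
    levels = np.arange(256)
    best_t, best_var = 0, np.inf
    for t in range(1, 255):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()   # cluster probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t + 1] * p[:t + 1]).sum() / w0   # cluster means
        mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1
        var0 = ((levels[:t + 1] - mu0) ** 2 * p[:t + 1]).sum() / w0
        var1 = ((levels[t + 1:] - mu1) ** 2 * p[t + 1:]).sum() / w1
        within = w0 * var0 + w1 * var1          # within-class (total) variance
        if within < best_var:
            best_t, best_var = t, within
    return best_t

def remove_background(gray):
    """Set pixels below the Otsu threshold to 0 and keep the rest, as in (4)."""
    t = otsu_threshold(gray)
    return np.where(gray < t, 0, gray).astype(np.uint8)
```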
3.1. Detecting Palmar Edge

After the palm is separated from the background by the above method, the palmar edge is detected. The pixels are scanned from top to bottom and from left to right so that the full image is covered. During scanning, a pixel is colored red wherever the gray-scale value changes between the background (0) and the hand (greater than 0). When the scans in both directions are finished, the complete set of palmar edge coordinates has been detected, as shown in Figure 5.

Figure 5: Scanning direction of hand image.
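A compact sketch of the two-direction scan, assuming the background has already been set to gray level 0; the paper colors the boundary pixels red, which is represented here by a Boolean edge map, and the scan is vectorized rather than written pixel by pixel.

```python
import numpy as np

def palm_edge(gray):
    """Scan top-to-bottom and left-to-right and mark pixels where the image
    changes between background (0) and hand (nonzero)."""
    fg = gray > 0
    edge = np.zeros_like(fg)
    # horizontal scan: transitions between neighboring columns
    edge[:, 1:] |= fg[:, 1:] != fg[:, :-1]
    # vertical scan: transitions between neighboring rows
    edge[1:, :] |= fg[1:, :] != fg[:-1, :]
    return edge & fg          # keep only boundary pixels that belong to the hand
```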

Once the palmar contour coordinates are determined, the tip points and valley points of the fingers must be defined in the palmar edge coordinate set in order to capture the geometric features and the palmprint range. The tip points are the tip coordinates of the five fingers, and the valley points are the valley coordinates between fingers, as shown in Figure 6. To determine them, a reference point $P_0 = (x_0, y_0)$ in the middle of the wrist is chosen from the coordinate set. The hand contour is then traversed clockwise starting from the wrist and returning to it, and the distance from $P_0$ to each contour point $P_i = (x_i, y_i)$ is recorded. The distance between the two points in the plane is the Euclidean distance

$$d_i = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2}. \tag{5}$$

The resulting distance distribution is shown in Figure 7.

Figure 6: Coordinates of valley points and tip points.

Figure 7: Distance between the contour points and the wrist reference point.

According to the distance distribution in Figure 8, there are five peaks and four valleys in the curve, corresponding to the five tip points and four valley points of the hand contour. The tip points and valley points are determined from the regular variation of the distance along the contour coordinate set. The principle is briefly described below.

Figure 8: Distribution of the distance between the palmar contour coordinates and the wrist reference point.

A tip point of the palmar contour is characterized by a distance that is larger than those of the preceding and following coordinates, with the distance decreasing gradually on both sides; a valley point shows the opposite behavior. Under these conditions the five tip points and four valley points can be identified clearly.
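One way to realize (5) and the peak/valley search is sketched below; scipy.signal.find_peaks and the minimum peak separation are our choices, and a real implementation may need extra filtering near the wrist.

```python
import numpy as np
from scipy.signal import find_peaks

def tips_and_valleys(contour, wrist_point, min_separation=40):
    """contour: (N, 2) array of (x, y) edge coordinates ordered clockwise from
    the wrist; wrist_point: (x0, y0) reference near the middle of the wrist.
    Returns contour indices of the five finger tips and four finger valleys."""
    d = np.hypot(contour[:, 0] - wrist_point[0],
                 contour[:, 1] - wrist_point[1])         # Euclidean distance (5)
    tips, _ = find_peaks(d, distance=min_separation)     # local maxima of d
    valleys, _ = find_peaks(-d, distance=min_separation)  # local minima of d
    # keep the five farthest peaks and the four deepest valleys (simplification)
    tips = tips[np.argsort(d[tips])[-5:]]
    valleys = valleys[np.argsort(d[valleys])[:4]]
    return np.sort(tips), np.sort(valleys)
```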

3.2. Palmprint Range Location

A straight line is drawn through the valley point between the index finger and the middle finger and the valley point between the ring finger and the little finger; the distance between the two valley points and the slope of this line are recorded. A new corner point is then located below the valley point between the index finger and the middle finger, at that same distance from it and perpendicular to the line, as shown in (6).

Similarly, a new corner point is located below the valley point between the ring finger and the little finger, at the same distance from it, as shown in (7).

The four points mentioned above are connected to form a square, and this square is taken as the palmprint region [9]. The palmprint feature range is shown in Figure 9. In this way, the required palmprint range can be captured accurately even for palms of different sizes and at different time points.

Figure 9: Image of the palmprint range.
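A sketch of this square construction, following the square-based segmentation idea of [9]: the two valley points define one side, and the other two corners are obtained by a perpendicular offset of the same length toward the palm. The helper below is our own illustration; the sign of the offset depends on the contour orientation.

```python
import numpy as np

def palmprint_square(v1, v2):
    """v1: valley point between index and middle finger (x, y);
    v2: valley point between ring and little finger (x, y).
    Returns the four corners of the square palmprint region."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    side = v2 - v1
    d = np.linalg.norm(side)                 # distance between the valley points
    # unit vector perpendicular to the v1-v2 line; flip the sign if it points
    # away from the palm for your contour orientation
    normal = np.array([-side[1], side[0]]) / d
    v3 = v1 + d * normal                     # new corner below v1 (cf. (6))
    v4 = v2 + d * normal                     # new corner below v2 (cf. (7))
    return v1, v2, v4, v3                    # corners in order around the square
```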
3.3. Marginalization (Edge Detection)

Region-based image processing is also known as image filtering or mask processing, and the mask is usually of size $3 \times 3$. Let $f(x, y)$ be the original image, let $w$ be the mask defined over the image region, and let $T$ be the operator that applies the mask to $f$; the processed image is

$$g(x, y) = T[f(x, y)] = \sum_{s=-1}^{1} \sum_{t=-1}^{1} w(s, t)\, f(x+s,\, y+t).$$

A vertical-detection mask is used to strengthen the edge image, as shown in Figure 10. Large changes of gray-scale value occur suddenly at edges in the gray-scale image; therefore, the edges with large gray-scale changes, that is, the required palmprint lines, can be obtained by adjusting the threshold, as shown in Figure 11.

Figure 10: Vertical edge detection realized by the mask.
Figure 11: Marginalized palmprint range.
3.4. Binarization

Each pixel of the gray-scale image has its own gray-scale value. Binarization sets a threshold first; each pixel of the original image is then examined, and if its value is greater than or equal to the threshold it is set to 1 (white), otherwise to 0 (black). The palmprint range thus becomes black and white, showing the palmprint lines, as shown in Figure 12. The principle is briefly described below.

Figure 12: Binarized image of the palmprint range.

A low-pass filter is convolved with the original image, and a threshold $T$ is set; the binary image is then obtained as

$$b(x, y) = \begin{cases} 1, & (h * f)(x, y) \ge T, \\ 0, & (h * f)(x, y) < T, \end{cases}$$

where $h$ is the low-pass filter and $f$ is the original image.

3.5. Noise Elimination

The masks for the Sobel operation are shown in Figure 13. The Sobel operation enhances edges in different directions. By adjusting the threshold, noise can be removed from the binary image while the palmprint image is preserved, as shown in Figure 14.

Figure 13: Mask for Sobel operation.
Figure 14: Palmprint range after the Sobel operation.
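The Sobel step can be sketched as follows with the standard 3×3 Sobel masks (the masks actually used by the paper are shown in Figure 13); the gradient magnitude is thresholded so that weak responses are discarded as noise. The threshold value here is illustrative only.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_palmprint(image, threshold=128):
    """Edge enhancement in two directions followed by thresholding to suppress
    noise.  The default threshold assumes a 0-255 image; scale it down if the
    input is a 0/1 binary palmprint image."""
    gx = convolve(image.astype(float), SOBEL_X)
    gy = convolve(image.astype(float), SOBEL_Y)
    magnitude = np.hypot(gx, gy)              # combined edge strength
    return magnitude >= threshold             # True where a palmprint edge remains
```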
3.6. Line Method

The line method converts the three principal palmprint lines into three straight lines to be retained while the noise is eliminated, so that the principal-line features can be extracted directly.

Blocks are sought first: the line pixels (value 0) in the Sobel-processed palmprint range are detected, and pixels adjacent to one another are grouped into a block. Each block is then replaced by its longest straight line, so the palmprint range becomes a set of straight lines, the line map, as shown in Figure 15. The shorter straight lines are removed, leaving the principal palmprint lines, as shown in Figure 16.

Figure 15: Line map.
Figure 16: Principal palmprint lines.
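A simplified sketch of the block-finding part of the line method: connected blocks of line pixels are labeled and short blocks are discarded. Replacing each remaining block by its longest straight line is omitted for brevity, and the minimum block size is our assumption.

```python
import numpy as np
from scipy import ndimage

def keep_principal_blocks(line_map, min_pixels=200):
    """line_map: Boolean array, True where a line pixel was detected.
    Label connected blocks and keep only the large ones, which correspond
    to the three principal palmprint lines."""
    labels, n = ndimage.label(line_map)                     # one label per block
    sizes = ndimage.sum(line_map, labels, range(1, n + 1))  # pixels per block
    keep = np.zeros_like(line_map, dtype=bool)
    for lab, size in enumerate(sizes, start=1):
        if size >= min_pixels:                              # drop short segments
            keep |= labels == lab
    return keep
```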
3.7. Feature Extraction

The features can be extracted directly after image preprocessing; they include slopes, lengths, and distances. The extracted palmprint features are listed in Table 1 and shown in Figure 17.

Table 1: Palmprint features.
Figure 17: Features of the palmprint lines.

Palmprint range perimeter: the valley points of the fingers define the square that encloses the palmprint feature range to be captured, and the perimeter of this square is taken as a feature, as shown in Figure 9.

Principal-line slope: after the line segmentation of the image preprocessing, any two pixels on each retained principal line determine the slope of that line, as shown in Figure 16.

Distances to the principal lines: in the image of the principal lines within the palmprint range, a straight line is drawn from one vertex of the palmprint range, the valley point between the ring finger and the little finger, across the square. This straight line crosses the three principal lines, and the distances from its starting point (the valley point between the ring finger and the little finger) to the three crossing points are three palmprint features.

Two lengths per principal line: the three principal lines in the palmprint range are extended, and auxiliary lines are drawn that cut each principal line into two segments, so each principal line contributes two length features. The different positions and angles of the principal lines influence these data significantly.
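A small sketch that assembles the features of Table 1 into a single vector; the function names and the ordering of the features are illustrative rather than the paper's exact layout.

```python
import numpy as np

def line_slope(p1, p2):
    """Slope of a principal line from any two of its pixels."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1) if x2 != x1 else float("inf")

def feature_vector(perimeter, slopes, start_point, crossing_points, lengths):
    """Assemble the palmprint features: region perimeter, three principal-line
    slopes, three distances from the ring/little valley point to the points
    where the reference line crosses the principal lines, and the two segment
    lengths of each principal line."""
    distances = [np.hypot(px - start_point[0], py - start_point[1])
                 for (px, py) in crossing_points]
    return np.array([perimeter, *slopes, *distances, *lengths], dtype=float)
```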

4. The Proposed Recognition Method

4.1. Extension Method

The foundation of the extension method is extension theory. The pillars of extension theory are the matter-element theory and the extension set theory, which use matter-element transformations to solve contradiction and incompatibility problems. The extension set uses the extension model to handle subjective and objective contradiction problems that traditional mathematics and fuzzy mathematics cannot; the range of the fuzzy set is extended from $[0, 1]$ to $(-\infty, \infty)$ by the correlation function [10–12].

In extension theory, a matter-element consists of the name of the object $N$, a characteristic $c$ of the object, and the magnitude $v$ of that characteristic, called the three elements of the matter-element. The matter-element describing an object is shown in (11):

$$R = (N, c, v). \tag{11}$$

Generally, an object has multiple characteristics. If an object is described by a multidimensional matter-element with $n$ characteristics, the characteristic vector can be expressed as $C = [c_1, c_2, \ldots, c_n]^{T}$ and the magnitude vector as $V = [v_1, v_2, \ldots, v_n]^{T}$. The multidimensional matter-element is expressed as

$$R = \begin{bmatrix} N, & c_1, & v_1 \\ & c_2, & v_2 \\ & \vdots & \vdots \\ & c_n, & v_n \end{bmatrix}.$$

The traditional classical set uses the magnitudes 0 and 1 to describe whether an object does or does not possess a characteristic, whereas fuzzy theory uses a membership function to describe the degree of membership within the range $[0, 1]$. The extension set extends this range to the real interval $(-\infty, \infty)$ to represent the degree to which an object possesses a characteristic.

Let $U$ be the domain. If every element $x$ in $U$ has a corresponding real number $y = K(x)$, then the extension set $\tilde{A}$ on $U$ is defined as

$$\tilde{A} = \{(x, y) \mid x \in U,\ y = K(x)\},$$

where $K(x)$ is the correlation function of the extension set $\tilde{A}$ and gives the correlation grade of $x$ with $\tilde{A}$; its range is $(-\infty, \infty)$. The extension set on the domain $U$ can be expressed as

$$\tilde{A} = A^{+} \cup A^{0} \cup A^{-},$$

where $A^{+}$, $A^{-}$, and $A^{0}$ are the positive field, negative field, and zero boundary of the extension set, respectively, expressed as

$$A^{+} = \{x \mid x \in U,\ K(x) > 0\}, \qquad A^{-} = \{x \mid x \in U,\ K(x) < 0\}, \qquad A^{0} = \{x \mid x \in U,\ K(x) = 0\}.$$

4.2. Recognition Procedure

The matter-element model must be built before recognition. First the extension classical domains are set: the objects are divided into grades, each with its own set of value ranges, called the extension classical domain of that set. Here $N_j$ represents the matter-element name of grade $j$, the characteristics of that matter-element are $c_1, c_2, \ldots, c_n$, and the range of characteristic $c_i$ is $V_{ji} = \langle a_{ji}, b_{ji} \rangle$, where $b_{ji}$ is the upper limit and $a_{ji}$ is the lower limit of the matter-element set. These upper and lower limits are called the classical domain.

The matter-element model can then be used for recognition, as shown in Figure 18. The recognition procedure of the extension algorithm is described below.

Figure 18: Flowchart of the identification method.

Step 1. Read the built matter-element model

$$R_j = \begin{bmatrix} N_j, & c_1, & \langle a_{j1}, b_{j1} \rangle \\ & c_2, & \langle a_{j2}, b_{j2} \rangle \\ & \vdots & \vdots \\ & c_n, & \langle a_{jn}, b_{jn} \rangle \end{bmatrix}, \qquad j = 1, 2, \ldots, m.$$

Step 2. Set up the neighborhood domain

$$R_p = \begin{bmatrix} N_p, & c_1, & \langle a_{p1}, b_{p1} \rangle \\ & c_2, & \langle a_{p2}, b_{p2} \rangle \\ & \vdots & \vdots \\ & c_n, & \langle a_{pn}, b_{pn} \rangle \end{bmatrix}.$$

The neighborhood domain covers, for each characteristic, the total range of that characteristic over all matter-element sets; its lower and upper limits are determined from the classical domains as

$$a_{pi} = \min_{j} a_{ji}, \qquad b_{pi} = \max_{j} b_{ji}.$$
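Steps 1 and 2 can be sketched as follows: each registered person's classical domain is taken as the per-feature minimum and maximum over that person's training samples, and the neighborhood domain spans all classical domains. The data layout and names are our assumptions.

```python
import numpy as np

def build_domains(training_samples):
    """training_samples: dict mapping person name -> (m, n) array of m training
    feature vectors with n features.  Returns the per-person classical domains
    (lower, upper) and the overall neighborhood domain."""
    classical = {name: (x.min(axis=0), x.max(axis=0))
                 for name, x in training_samples.items()}
    lowers = np.min([lo for lo, _ in classical.values()], axis=0)
    uppers = np.max([hi for _, hi in classical.values()], axis=0)
    neighborhood = (lowers, uppers)           # covers every classical domain
    return classical, neighborhood
```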

Step 3. Read the data to be tested,

$$R_t = \begin{bmatrix} N_t, & c_1, & v_{t1} \\ & c_2, & v_{t2} \\ & \vdots & \vdots \\ & c_n, & v_{tn} \end{bmatrix},$$

where $N_t$ is the name of the matter-element to be tested and $v_{ti}$ is the value of characteristic $c_i$ of $N_t$.

Step 4. Calculate the correlation functions of the matter-elements.
In extension theory, the classical domain $X_0 = \langle a, b \rangle$ and the neighborhood domain $X_p = \langle a_p, b_p \rangle$ are two intervals in the real number field, with $X_0 \subset X_p$. If $x$ is a point in the real number field, the correlation function is defined as

$$K(x) = \begin{cases} \dfrac{-\rho(x, X_0)}{|X_0|}, & x \in X_0, \\[2mm] \dfrac{\rho(x, X_0)}{\rho(x, X_p) - \rho(x, X_0)}, & x \notin X_0, \end{cases}$$

where

$$\rho(x, X_0) = \left| x - \frac{a + b}{2} \right| - \frac{b - a}{2}, \qquad \rho(x, X_p) = \left| x - \frac{a_p + b_p}{2} \right| - \frac{b_p - a_p}{2}, \qquad |X_0| = b - a.$$

Here $K(x)$ is the correlation grade of $x$ with $X_0$ and $X_p$. The correlation function is shown in Figure 19. If $K(x) \ge 0$, it expresses the degree to which $x$ belongs to $X_0$; if $K(x) < 0$, it expresses the degree to which $x$ does not belong to $X_0$.

Figure 19: Extension correlation function.

Step 5. Find the maximum correlation function, that is, the maximum correlation grade.

Step 6. The recognized identity is displayed when the maximum correlation grade is greater than or equal to the threshold; otherwise, no identity is displayed.

During recognition, the system searches the optimal matter-element model for the maximum correlation grade of the test matter-element and takes the corresponding grade as the identity of the person. A threshold is applied to the maximum correlation grade so that identities outside the database are not admitted.
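A sketch of Steps 4 to 6 using the extension correlation function given above; the equal feature weights are a placeholder, since the paper determines the weights during training, and the threshold 0.417 is the optimal value reported below.

```python
import numpy as np

def rho(x, a, b):
    """Extension distance of x from the interval <a, b>."""
    return abs(x - (a + b) / 2.0) - (b - a) / 2.0

def correlation(x, classical, neighborhood):
    """Correlation grade of a single feature value x with a classical domain
    <a, b> lying inside the neighborhood domain <ap, bp>."""
    a, b = classical
    ap, bp = neighborhood
    r0, rp = rho(x, a, b), rho(x, ap, bp)
    if r0 < 0:                       # x lies inside the classical domain
        return -r0 / (b - a)
    denom = rp - r0
    return r0 / denom if denom != 0 else -r0 - 1.0   # common convention when equal

def recognize(sample, classical_domains, neighborhood, weights=None, threshold=0.417):
    """Return the best-matching identity, or None if the maximum correlation
    grade stays below the threshold (Step 6)."""
    lo_p, hi_p = neighborhood
    grades = {}
    for name, (lo, hi) in classical_domains.items():
        k = [correlation(x, (lo[i], hi[i]), (lo_p[i], hi_p[i]))
             for i, x in enumerate(sample)]
        w = weights if weights is not None else np.ones(len(k)) / len(k)
        grades[name] = float(np.dot(w, k))
    best = max(grades, key=grades.get)
    return best if grades[best] >= threshold else None
```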

To set the optimal threshold, an initial threshold is chosen first, and the false rejection rate (FRR) and the false acceptance rate (FAR) at this threshold are calculated. FRR is the rate at which the system misidentifies a person inside the database as one outside it; FAR is the rate at which the system misidentifies a person outside the database as one inside it. If the number of falsely rejected samples is $N_{FR}$, the number of falsely accepted samples is $N_{FA}$, and the total number of test samples is $N_{total}$, then

$$\mathrm{FRR} = \frac{N_{FR}}{N_{total}} \times 100\%, \qquad \mathrm{FAR} = \frac{N_{FA}}{N_{total}} \times 100\%.$$

The threshold is then adjusted gradually while the variation of FRR and FAR is observed, as shown in Figure 20. The threshold at which FRR equals FAR is the optimal threshold; it is set to 0.417 in this paper.

Figure 20: FAR and FRR distribution in extension theory.
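The search for the optimal threshold where FRR equals FAR can be sketched as a simple sweep; the genuine and impostor score arrays are assumed to be the maximum correlation grades of in-database and out-of-database test samples, and the number of candidate thresholds is arbitrary.

```python
import numpy as np

def equal_error_threshold(genuine_scores, impostor_scores, candidates=None):
    """Sweep candidate thresholds and return the one where FRR and FAR are
    closest, both measured against the total number of test samples."""
    scores = np.concatenate([genuine_scores, impostor_scores])
    if candidates is None:
        candidates = np.linspace(scores.min(), scores.max(), 500)
    n_total = len(genuine_scores) + len(impostor_scores)
    best_t, best_gap = candidates[0], np.inf
    for t in candidates:
        frr = np.sum(genuine_scores < t) / n_total    # falsely rejected genuine users
        far = np.sum(impostor_scores >= t) / n_total  # falsely accepted impostors
        if abs(frr - far) < best_gap:
            best_t, best_gap = t, abs(frr - far)
    return best_t
```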

Step 7. If data recognition is completed, stop; otherwise return to Step 2.

5. Testing Results and Discussion

5.1. Hand Image Recognition System Architecture

Since the capture tool used in this paper is a charge-coupled-device webcam, which is easily influenced by light, the capture structure is a semienclosed space.

For hand image capture, the hand is placed palm-up in the semienclosed space, as shown in Figure 21. The light is directed downward at 35°, where the image is brightest and sharpest, and the hand rests gently on the bottom of the enclosed space, so even keeping the hand there for a long time does not make the subject uncomfortable. A webcam shoots downward from above.

Figure 21: The semienclosed box.

To avoid the influence of ambient light, an LED lamp is fixed inside the semienclosed space to provide constant light, and the interior is lined with light-reflecting paper, as shown in Figure 22, so that the light spreads uniformly inside the box and the images captured by the webcam are more stable. In addition, two fixed protrusions at the bottom of the box locate the index finger-middle finger and ring finger-little finger valley points, fixing the distance between the palm and the webcam. This completes the capture device.

Figure 22: The interior of the semienclosed box.

Besides the webcam, the recognition system proposed in this study requires a computer running a Visual Basic 6.0 program to process the hand images, as shown in Figure 23. The entire process, from the start of recognition to the end, takes 5 seconds.

Figure 23: Man-machine interface of the proposed system.
5.2. Experimental Data

To evaluate the recognition capability of the proposed method, the experiment used 25 male and 10 female subjects, 35 in total, 20 of whom were enrolled in the database. Because the recognition algorithm runs in two stages, a training mode and a recognition mode, the data of all subjects were divided into training samples and recognition samples. The subject data captured in Stage 1 were the training samples, from which the recognition algorithm determined the weight of each feature to complete the optimal matter-element model. In Stage 2, the recognition mode, hand data outside the completed matter-element model were compared with the previously built matter-element model to check whether the identity was in the database; if so, the identity of the subject was indicated.

For objectivity and realism, the images of each subject's hand were captured at three different times, 5 images each time, so each subject had 15 images: 10 were training samples and 5 were recognition samples. In total, 250 images served as training samples for building the matter-element model, and the remaining 275 images were used to test the recognition accuracy of the algorithm, as shown in Table 2.

Table 2: Experimental data.

The traditional methods were compared with the proposed method, as shown in Table 3. The extension theory still achieves an accuracy rate of 91% even at low resolution, higher than the other methods, and it requires no learning phase; this demonstrates the feasibility of extension theory.

Table 3: Recognition results.

In addition, different recognition systems are compared in Table 4. All of them are used for biometric recognition, and since their recognition rates all exceed 90%, price becomes the key point. The recognition systems on the market need an additional computer, and their prices are much higher than that of the system proposed in this study, which greatly affects their acceptability in the market. Since the public is unwilling to pay a high price for an entrance card, a low-price recognition system with a high recognition rate is more likely to be accepted by the market.

Table 4: Comparison among different recognition systems.

6. Conclusions

The proposed recognition system consists of a webcam and a semienclosed box that fixes the lighting. Visual Basic 6.0 is used to process the hand images and extract the features for identification. When the webcam captures the palmprint, the image is preprocessed before feature extraction; because the image is captured inside the semienclosed box, it is free from ambient light interference, and the user only needs to lay the hand open naturally for photographing. The experimental results show that the accuracy rate of the extension recognition algorithm is 91%, higher than that of traditional neural network algorithms. This new approach merits more attention, because the low-cost device deserves serious consideration as a tool for palmar recognition problems. We hope this paper will lead to further investigation for industrial applications.

References

  1. R. W. Ives, Y. Du, D. M. Etter, and T. B. Welch, “A multidisciplinary approach to biometrics,” IEEE Transactions on Education, vol. 48, no. 3, pp. 462–471, 2005.
  2. S. Z. Li, R. F. Chu, S. C. Liao, and L. Zhang, “Illumination invariant face recognition using near-infrared images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 627–639, 2007.
  3. C. T. Chou, S. W. Shih, W. S. Chen, V. W. Cheng, and D. Y. Chen, “Non-orthogonal view iris recognition system,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 3, pp. 417–430, 2010.
  4. D. Zhang, F. Liu, Q. Zhao, G. Lu, and N. Luo, “Selecting a reference high resolution for fingerprint recognition using minutiae and pores,” IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 3, pp. 863–871, 2011.
  5. D. Zhang, G. Lu, W. Li, L. Zhang, and N. Luo, “Palmprint recognition using 3-D information,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 39, no. 5, pp. 505–519, 2009.
  6. G. Zheng, C. J. Wang, and T. E. Boult, “Application of projective invariants in hand geometry biometrics,” IEEE Transactions on Information Forensics and Security, vol. 2, no. 4, pp. 758–768, 2007.
  7. J. Doi and M. Yamanaka, “Discrete finger and palmar feature extraction for personal authentication,” IEEE Transactions on Instrumentation and Measurement, vol. 54, no. 6, pp. 2213–2219, 2005.
  8. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
  9. Y. Wang, Q. Ruan, and X. Pan, “An improved square-based palmprint segmentation method,” in Proceedings of the International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS '07), pp. 316–319, December 2007.
  10. M. H. Wang, K. H. Chao, G. J. Huang, and H. H. Tsai, “Application of extension theory to fault diagnosis of automotive engine,” ICIC Express Letters, vol. 5, no. 4(B), pp. 1293–1299, 2011.
  11. M. H. Wang, K. H. Chao, W. T. Sung, and G. J. Huang, “Using ENN-1 for fault recognition of automotive engine,” Expert Systems with Applications, vol. 37, no. 4, pp. 2943–2947, 2010.
  12. M. H. Wang, Y. K. Chung, and W. T. Sung, “The fault diagnosis of analog circuits based on extension theory,” in Emerging Intelligent Computing Technology and Applications, vol. 5754 of Lecture Notes in Computer Science, pp. 735–744, Springer, Berlin, Germany, 2009.