International Journal of Vehicular Technology
Volume 2013 (2013), Article ID 901524, 13 pages
http://dx.doi.org/10.1155/2013/901524
Research Article

Palm Personal Identification for Vehicular Security with a Mobile Device

1Department of Information and Communication Engineering, Chaoyang University of Technology, Taichung 41349, Taiwan
2Department of Computer Science and Technology, Harbin Institute of Technology, Shenzhen Graduate School, Shenzhen 518055, China

Received 20 October 2012; Revised 15 February 2013; Accepted 1 March 2013

Academic Editor: Cheng-Min Lin

Copyright © 2013 Chih-Yu Hsu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Security certification has drawn increasing attention in recent years, and biometric technology is used in many different areas of security certification. In this paper, we propose a palm image recognition method that identifies an individual for a vehicular application: the palm image serves as a key for verifying the car owner. We use a mobile phone camera to capture palm images and perform a new identification approach based on feature regularization of the palm contour. After the identity is confirmed, the phone connects to the car via Bluetooth or WiFi to unlock it. Our experiments show that the approach is effective and feasible.

1. Introduction

The car has served as daily transport for some 120 years, and with the development of science and technology it has undergone rapid changes that significantly improved human life. Automotive systems include vehicle monitoring systems, car GPS systems, 3G car systems, and in-car information systems. With the rapid development of automotive electronics, intelligent vehicle technology is gradually being applied: it makes handling a car simpler, and driving safety keeps improving. So far, however, on-board systems have focused on improving the driving and riding experience, with no significant improvement in antitheft protection. Car theft remains a problem that troubles people, and a stolen car is a major loss for an ordinary person. We therefore focus on enhancing the civilian vehicle lock with a new biometric technology.

Personal identification accompanies us throughout life. However, the wide variety of identity documents is inconvenient in daily life, and forged identity documents are common. To change this situation and protect people's property, we want to use the unique features of individuals for authentication, so we apply biometric technology to turn biological characteristics into a secure password.

Biometric technology has attracted growing attention in secure authentication in recent years [1] and is used in many different areas of authentication. A biometric system must rely on a personal feature that does not change easily over time and that most people possess. This paper identifies a personal characteristic through palm image recognition to increase the security of automotive protection, as a supplement to the car key.

When a mobile phone connects to the car via Bluetooth or WiFi with common secure encryption and authentication, the security of the link can be realized with today's technology. Therefore, if a palm image is validated on the phone, it is feasible to send a secure code to open the vehicle door. Moreover, since the palm image is taken with the mobile phone, it is easier to protect against rain and snow than with on-car equipment. Of course, gloves or hand injuries are still obstacles for our method; however, many biometric technologies, such as fingerprint and face recognition, also fail in these cases, yet they remain popular in many applications.

Two other issues should be mentioned. The first is the security of the palm database; common encryption technology can enhance the security of the database and resolve this concern. The second is processing speed; a current high-end mobile phone is faster than a personal computer of several years ago, so as long as it does not process a large number of palm images, performance is not a problem.

Finally, current vehicle electronic locks carry no personal biometric information, and we have found no similar technology that uses a mobile phone to take a palm image to open a car; our idea is therefore novel for such an application.

The rest of this paper is organized as follows. Section 2 introduces related research on palm image identification. The overall architecture of the palm image identification system is illustrated in Section 3. Section 4 details the extraction of the palm image, its preprocessing, and the way we obtain the eigenvalues. The results and discussion of the experiments are given in Section 5, and the conclusion is stated in Section 6.

2. Related Work and Background

Biometrics exploits characteristics unique to each human to distinguish one person from another and applies them to various fields. Recent biometric research covers a diversity of methods, with convenience and safety as the main considerations and research directions of biometric authentication [2-9].

Palm geometry recognition is a feasible and easy-to-use biometric technology today. As the name suggests, it uses palm geometry, or characteristics such as the width and length of the fingers, to perform identification [10, 11]. However, to capture images consistently, most palm recognition machines use small fixed cylinders to hold the palm in place. This becomes a disadvantage of palm geometry recognition, because not every palm can fit the fixed cylinder spacing [12].

In 1999, Jain and Duta [13] proposed using palm geometry for identification. Their method fixes small cylinders between the fingers, which causes image distortion, so the palm contour must be corrected before feature extraction; finally, the distance between shapes is calculated for identification.

In 2000, Reillo [14] used a low-resolution digital camera to capture colored palm images and transformed them into grayscale images. The four fingers are then extracted, and the finger thickness, width, and distance, as well as the angles between midpoints and fingertips, are measured as image feature values; palm classification can then be performed with this method.

In 2008, Wang [15] proposed a palm identification method based on morphology: the palm area is defined according to each person's middle finger length, the palm shape is cut into irregular cell blocks, and the gray value of each block is taken as the eigenvalue. The method uses a two-stage recognition module with palm shape and palm print as characteristic values: it first performs rough identification using the palm ribbon area and then uses the average grayscale value or variation vector of a block as the signature for identification.

Summing up the above studies, almost all palm geometry recognition technologies either require a fixed palm position (but cylinders fixed between the fingers can depress the skin and introduce errors into the captured palm contour) or require the palm range of the hand to be defined before feature extraction. These restrictions greatly limit the development of palm geometry recognition; in particular, the requirement of a fixed palm position prevents its use on mobile communication equipment, because users cannot carry a capture instrument at all times. Since palm shape is among the most feasible biometrics, we keep improving palm geometry recognition technology.

Early biometric technology was a method of fighting crime, using individuals' unique characteristics to find suspects, and it later became a means of safety certification. Palm shape is harder to obtain and replicate than fingerprints or face shape; as shown in Figure 1, fingerprints easily leave marks on biometric instruments and can be copied from them, whereas palm shape is not so simple to obtain, so hand geometry recognition remains an area of public interest.

Figure 1: Comparison of biometric authentication systems (data source: Hitachi).

A biometric system generally operates in one of the following two modes.

(1) Verification Mode. Verification is most often used to prevent one's identity from being used by multiple persons or stolen. The system performs one-to-one biometric verification against the database; validation may be combined with an input user name or password to ensure the user's identity, giving the effect of two-factor verification.

(2) Identification Mode. Users do not have to enter a name or password; the system performs one-to-many matching against the database, examining whether the identification information is already present. On the other hand, identification requires stronger computing power to correctly perform one-to-many matching.

3. Proposed System Architecture

This paper computes identity-recognition features from captured palm images. We hope to apply palm image recognition to mobile devices and car security, but to run on a mobile device we must remove the restrictions of large-scale identification machines. Modern communication equipment is thin and light yet more powerful than in the past; we hope palm recognition on a mobile device can use simple computation without losing its function, so this paper proposes a method that is easy to calculate and still recognizes personal identity.

There are many ways to capture palm images, such as cameras and mobile phones; in our design, we use the mobile device as the capture tool. The overall system works as follows: the captured palm image is preprocessed; the image is converted to the HSV (Hue, Saturation, Value) color model and the S channel is extracted; Otsu's algorithm automatically selects a threshold and transforms the grayscale image into a binary image; mathematical morphology removes noise; LOG (Laplacian of Gaussian) edge detection obtains the palm edge; finally, the palm characteristic areas are calculated from the feature points and stored for comparison. The system architecture diagram is shown in Figure 2.

Figure 2: Identity authentication system architecture diagram.

In this authentication system architecture, all data must first be established in the database; without it, identification cannot be carried out, just as when authentication fails. We first extract features from all colored palm images taken by mobile phones: image processing obtains the feature points and calculates the characteristic areas, and these area values are stored in the database. When a new palm image is input into the system, its characteristic area values are compared with the database.

4. Proposed Algorithm

Palm image processing is a very important aspect: both the capture quality of the palm image and the quality of the processing affect the experimental results. With high-quality palm capture and complete image processing, the experimental results will also be better.

In this section, we describe each step of the palm image processing.

(a) The Image Preprocessing. In image preprocessing, the most important step is transforming the image into a binary image, which reduces color complexity; the result of this transformation usually affects all subsequent image processing, such as edge detection and segmentation, so binarization is a very important step. The most widely used method is Otsu's algorithm [16], which automatically selects the best binarization threshold; once the image is converted to a binary image, we can complete the rest of the image processing.

In the preprocessing of palm images in this research, we transform the palm image into the HSV (Hue, Saturation, Value) color model and obtain the S channel image; then, using Otsu's algorithm, we transform the grayscale image into a binary image; after that, we use mathematical morphology to remove noise and LOG (Laplacian of Gaussian) edge detection to obtain the palm edge. The image processing flow is shown in Figure 3.

Figure 3: Image processing flow chart.
4.1. Color Model

Color can be described in many different ways [17, 18]. A color model, also called a color space, can render all color sets; it describes colors based on standard values. Basically, a color model consists of a three-dimensional coordinate system and a subspace in which each color is represented by a point. In image processing, common color models are the RGB (Red, Green, Blue) model, the HSV (Hue, Saturation, Value) model, and YCbCr.

The RGB color model is often used to represent colors; it uses Cartesian coordinates, with the color range defined as a cube, as shown in Figure 4. The color space is a three-dimensional linear space, so any color can be expressed as a space vector. However, color saturation and brightness are not completely described.

Figure 4: RGB color model.

The HSV color model describes not only hue but also saturation and brightness, as shown in Figure 5; it gives a more complete description of colors and covers what RGB cannot describe. Matching human color cognition, the HSV model uses three basic characteristics to describe a color. The first is hue, the basic property of a color, such as red or green. The second is saturation, also called chroma, which refers to the purity of the color. The third is value, which represents the brightness, that is, the relative lightness or darkness of the color. An HSV value is obtained from the corresponding RGB value via formula (1); compared with other color models, conversion from RGB to HSV is relatively simple. Assume that (R, G, B) are the red, green, and blue components, each a real number between 0 and 1, and let max be the maximum and min the minimum of R, G, and B:

H = 60° × (G − B)/(max − min) mod 360°, if max = R,
H = 60° × (B − R)/(max − min) + 120°, if max = G,
H = 60° × (R − G)/(max − min) + 240°, if max = B,
H = 0, if max = min;
S = 0 if max = 0, otherwise S = (max − min)/max;
V = max.
(1)

Figure 5: HSV color model.

In this paper, we mainly use a mobile phone to obtain palm images, and the captured images are in the RGB color model. Converting the palm image to the HSV color model through the corresponding formula not only improves the color representation but also allows more accurate subsequent analysis and processing.
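As a concrete check of formula (1), the following sketch converts one RGB triple to HSV using the max/min rules above; the skin-like input value is hypothetical.

```python
def rgb_to_hsv(r, g, b):
    """RGB components in [0, 1] -> (H in degrees, S, V) per the max/min formulas."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:                          # achromatic: hue undefined, use 0
        h = 0.0
    elif mx == r:
        h = (60 * (g - b) / (mx - mn)) % 360
    elif mx == g:
        h = 60 * (b - r) / (mx - mn) + 120
    else:                                 # mx == b
        h = 60 * (r - g) / (mx - mn) + 240
    return h, s, v

h, s, v = rgb_to_hsv(0.8, 0.4, 0.2)  # a skin-like tone; S is the channel we keep
```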

4.2. Mathematical Morphology for Palm Images

Mathematical morphology is a method of reshaping objects [19] that can be used to remove noise. Alternating dilation (expansion) and erosion is the most common way to eliminate noise in a binary image, and it is our choice in this paper. As shown in Figure 6, dilation expands the object outward from its boundary: Figure 6(a) is the original element and Figure 6(b) the structuring element; the structuring element scans along the original element, and the parts of the structuring element outside the original element are added to it, as shown in Figure 6(c). Conversely, erosion contracts the object inward from its boundary, as shown in Figure 7. The result depends on the number and order of dilations and erosions; usually, to prevent objects from shrinking, the number of erosions must equal the number of dilations. Combining dilation and erosion cleans unnecessary parts of the image.

Figure 6: Expansion scheme diagram.
Figure 7: Erosion scheme diagram.

We hope palm image processing can capture the most important part of the whole palm. However, during shooting, signals that do not exist in the real scene may appear in the image; we call this noise, and it may affect the results of feature extraction. We therefore use dilation and erosion to remove noise during preprocessing, as in Figure 8, eliminating large noise that would affect the palm image.

Figure 8: Result of expansion and erosion of palm images.
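The noise-removal step can be sketched with 3x3 binary dilation and erosion written directly in NumPy, applying an equal number of each as the text requires; the toy image below is a hypothetical stand-in for a binarized palm.

```python
import numpy as np

def dilate(img):
    """3x3 binary dilation: a pixel is set if any pixel in its neighborhood is set."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out

def erode(img):
    """3x3 binary erosion: a pixel survives only if its whole neighborhood is set."""
    p = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out

# A filled "palm" block plus a single-pixel speck of noise.
img = np.zeros((16, 16), dtype=np.uint8)
img[4:12, 4:12] = 1   # object
img[1, 1] = 1         # noise
opened = dilate(erode(img))  # one erosion then one dilation removes the speck
```

Erosion first deletes the isolated speck and shrinks the block by one pixel; the matching dilation restores the block, so only the noise disappears.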
4.3. Edge Detection

Edge detection is an important part of image feature extraction; with better edge detection, feature extraction becomes easier. Edge detection is needed to segment objects from the background, and in a binary image we usually compute it from the image gradient. Gradient detection methods fall broadly into two categories: the first uses first-derivative operators such as Sobel and Canny, and the second uses second-derivative filter functions, for example, the Laplacian and LOG (Laplacian of Gaussian) [20, 21]. The first category sets a threshold value; when the calculated gradient value is greater than the threshold, the pixel is marked as a boundary. This works well for step edges, but for ramp-like edges there is usually more than one value above the threshold, so the obtained boundary line may not be accurate. The second category is very sensitive to noise, so before taking the second derivative we usually apply a low-pass filter to remove noise and smooth the image. LOG combines a Gaussian low-pass filter with the Laplacian, and the detection is completed in the following two steps. (i) Convolve the original image f(x, y) with a Gaussian low-pass filter G(x, y) = exp(−(x² + y²)/(2σ²)) to obtain the smoothed image g(x, y) = G(x, y) * f(x, y). (ii) Apply the Laplacian to the smoothed image: ∇²g(x, y) = [∇²G(x, y)] * f(x, y).

The edges of the human palm are ramp-like and wavy, so we use the LOG function of the second category for edge detection. As Figure 9 shows, the edge detected from the palm is very clean; thus we use it to extract the features in our algorithm.

Figure 9: LOG edge detection in the palm image.
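A minimal sketch of the LOG operator: sample the Laplacian-of-Gaussian kernel, zero-center it so flat regions give no response, and convolve; the step-edge test image and the σ value below are illustrative choices, not the paper's settings.

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Sampled Laplacian-of-Gaussian kernel, zero-centred."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    k = ((r2 - 2 * sigma ** 2) / sigma ** 4) * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # enforce zero response on constant images

def convolve_valid(img, k):
    """Naive 'valid' 2-D convolution, enough for a small demo (kernel is symmetric)."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

# A vertical step edge: the LoG response changes sign where the edge lies.
img = np.zeros((20, 20))
img[:, 10:] = 1.0
resp = convolve_valid(img, log_kernel())
```

In practice the zero crossings of `resp` would be traced to obtain the palm contour.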
4.4. Feature Point and Area

We search the palm contour extracted by the LOG edge detection method over the whole picture and select all the feature points of interest; as shown in Figure 10(a), the feature points are defined at the fingertips and finger valleys. These feature points form triangles, and the distance formula gives the side lengths of the triangles. We use Heron's formula [22] to calculate the area of each triangle as a characteristic area; it computes the area of a triangle from its three side lengths alone. The formula is Area = sqrt(s(s − a)(s − b)(s − c)), where s = (a + b + c)/2 and a, b, and c are the lengths of the three sides of the triangle.

Figure 10: LOG palm feature points.

Figure 10(b) shows a palm characteristic area. We use the areas of the three triangles as the experimental characteristic areas, which play a very important role in our research; effective characteristic areas allow a person to be identified.
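Heron's formula applied to a triangle of feature points can be sketched as follows; the pixel coordinates are hypothetical stand-ins for detected fingertip/valley points.

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def side(p, q):
    """Euclidean distance between two feature points."""
    return math.dist(p, q)

# Hypothetical fingertip/valley pixel coordinates forming one triangle.
p1, p3, p4 = (120, 40), (95, 160), (150, 150)
area = heron_area(side(p1, p3), side(p3, p4), side(p1, p4))
```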

4.5. Projection Geometry

Projective geometry [23] has many properties, and the one we rely on is projection invariance. If the palm is tilted at an angle when the image is captured, the computed characteristic sizes may deviate and thereby affect the identification result; projective geometry handles this situation. The invariance means that, regardless of the inclination angle at which the object is projected, the ratio of areas remains the same. In the experiment, each characteristic area of the palm is divided by a common area so that the ratios keep the same proportion, ensuring that the results are not affected by angular deviation when capturing palm images.
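A small numeric check of the area-ratio idea, sketched under an affine tilt model (a simplification of full perspective projection: an affine map scales every area by the same factor |det A|, so area ratios survive exactly). The triangles and the transform are invented for illustration.

```python
import numpy as np

def tri_area(pts):
    """Unsigned area of a triangle given as a 3x2 array (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# A reference ("common") triangle and a characteristic triangle on the palm.
t_ref = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
t_mid = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0]])

A = np.array([[1.1, 0.3], [-0.2, 0.9]])  # hypothetical tilt/scale
b = np.array([5.0, -2.0])
warp = lambda t: t @ A.T + b             # affine map of all points

ratio_before = tri_area(t_mid) / tri_area(t_ref)
ratio_after = tri_area(warp(t_mid)) / tri_area(warp(t_ref))
```

Both triangles scale by |det A|, so the ratio is unchanged; this is why each palm area is divided by a common area before matching.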

5. Results and Discussion

In the experiment, we use images we captured ourselves with an Android mobile phone, as shown in Figure 11. We recruited about thirty men and women and shot their hand images as experimental material. The palms of the subjects were shot at three height ranges (30 cm to 35 cm, 25 cm to 30 cm, and 20 cm to 25 cm), with one palm image per range. The shooting distance cannot be exactly the same every time, so we let each person hold the palm at a natural distance within the range. This not only solves the problem of having to fix the shooting distance but also widens the method's applicability.

Figure 11: The palm image schematic diagram.

Because the shooting location and environment differ for each palm image, which may affect the identification result, we use grey colored paper as the background and delete seriously blurred images in advance. Feature points and feature surfaces are first selected manually, to test whether the feature surface can identify a person's identity; automatic selection of feature points and surfaces will be studied further. We divided the experiment into three parts, analyzing two-dimensional, three-dimensional, and four-dimensional palm eigenvalues.

In the first part, we use the palm's two-dimensional characteristic values as the experimental focus. As shown in Figure 12(a), we use the fingertip of the middle finger (P1), the fingertip of the ring finger (P2), and the finger valleys (P3, P4, and P5) formed by the index, middle, ring, and little fingers as feature points. The triangle formed by P1, P2, and P3 is the middle finger area, and the triangle formed by P2, P4, and P5 is the ring finger area; P3, P4, and P5 form another triangle. As shown in Figure 12(b), we first calculate the areas of the three triangles with Heron's formula; then the middle finger area and the ring finger area are each divided by the area of the triangle formed by P3, P4, and P5. The first ratio is used as the x-coordinate and the second as the y-coordinate. We calculate these ratios for the palm images shot at the three distances, so every participant has three coordinate points, and we take the center of gravity of these three points as the database sample. The test materials cannot be completely identical since the shooting height may vary; we hope the values from the three shooting distances are very similar for each person yet differ between persons, so that we can easily identify whether two samples come from the same person; that is why we take the center of gravity. When new data arrives, we compare it with the centers of gravity in the database; it belongs to the person with the nearest center of gravity.

Figure 12: Two vector feature points selection.
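The two-dimensional feature and center-of-gravity construction described above can be sketched as follows; the point coordinates and ratio values are hypothetical stand-ins for measured data.

```python
def tri_area(p, q, r):
    """Triangle area from three (x, y) points (shoelace formula)."""
    return abs(p[0] * (q[1] - r[1]) + q[0] * (r[1] - p[1]) + r[0] * (p[1] - q[1])) / 2

def feature_2d(P1, P2, P3, P4, P5):
    """(middle area / valley area, ring area / valley area), per the text."""
    valley = tri_area(P3, P4, P5)
    return (tri_area(P1, P2, P3) / valley, tri_area(P2, P4, P5) / valley)

def centroid(samples):
    """Centre of gravity of the per-distance feature samples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

# Hypothetical 2-D features from the three shooting distances for one subject.
samples = [(1.42, 1.18), (1.45, 1.15), (1.40, 1.21)]
g = centroid(samples)  # stored in the database as this subject's sample
```

A new image's feature pair would then be assigned to the subject whose stored centroid `g` lies nearest.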

As shown in Figure 13, we use different colors to distinguish four different people. The circles represent the values at different shooting distances, and "*" represents the center of gravity. Each person contributes three measurements, used as coordinates, from which the center of gravity is computed. From the figure, the red, green, and blue marks clearly belong to different people, since the gaps between their centers of gravity are large. The light blue and blue marks also belong to different people, but their centers of gravity are so close that new data might fall between them, making it difficult to decide which center of gravity the new data belongs to. With such two-dimensional characteristic values, we may not be able to identify a person's identity correctly. Therefore, we increase the eigenvalues from two dimensions to three; the second part observes whether the three-dimensional method can match personal identity.

Figure 13: The two-dimensional characteristic values and the center of gravity points.

In the second part, we use the three-dimensional feature values of the palm as the experimental focus. The two-dimensional feature values cannot always distinguish whether two samples come from the same person: the area formed by the fingertips and valleys may be affected by how far the fingers are opened or closed, changing the whole characteristic area and causing false judgments. We continue to use the gravity point as the database entry and compare new data against the gravity points in the database, hoping that the three-dimensional characteristic values can distinguish different persons' palms.

We use the 4 finger-valley feature points shown in Figure 14, which can form 4 triangles. Using P3 as the common point, we have 3 triangles: the triangle P3, P4, and P5; the triangle P3, P4, and P7; and the triangle P3, P5, and P7, as shown in Figure 15. Let A denote the triangle P3, P4, and P5; B the triangle P3, P4, and P7; and C the triangle P3, P5, and P7. Having obtained A, B, and C, we calculate their areas. To handle the palm elevation angle, we divide the areas of A, B, and C by the area of A, respectively; the first result is the x-coordinate, the second the y-coordinate, and the third the z-coordinate. We calculate these characteristic values for the palm images at the three distances, so each person provides three points in space, and the center of gravity of the three points is stored as the database sample, as shown in Figure 16.

Figure 14: 3D feature points.
Figure 15: Three-dimension characteristics area.
Figure 16: Three-dimensional measurement values with the center of gravity point.

Figure 16 is a three-dimensional graph; each mark represents one point in space, and "*" represents a center of gravity. Although the points seem to cluster, this is a three-dimensional space, so the points are in fact not so close; the centers of gravity do not overlap and are separated by a certain distance. Using the three-dimensional gravity points as characteristic values for identification is feasible, but in the experiment we found that the gravity-center calculation is no longer so reliable: two different people's gravity points may have only a slight gap or even coincide, in which case the value is not a reliable and feasible characteristic. Still, we found that increasing the feature dimension from two to three is better. The third part increases the feature dimension to four, hoping the added dimension allows better identification; we no longer use the center of gravity as the database entry but store all samples, compare new data with the whole collection, and take the entry at minimum distance as the same person.

In the third part, we use the four-dimensional characteristic values of the palm as the experimental focus. Although the three-dimensional valley-area features distinguish different palms better than the two-dimensional ones, with enough data the computed gravity points may still be very close or even equal, which easily leads to errors. We therefore increase the eigenvalue to four dimensions and no longer use the gravity point: we take the stored sample nearest to the data under test and attribute the data to that class, that is, we calculate the distances between the data under test and all database entries and assign it to the personal palm image at minimum distance.

We use the four valley feature points shown in Figure 17, which form four triangles: the triangle P3, P4, and P5; the triangle P3, P4, and P7; the triangle P3, P5, and P7; and the triangle P4, P5, and P7. We calculate the areas of these four triangles as our four-dimensional characteristics, denoted A, B, C, and D. After calculating the four areas, we divide each by a common area so that the ratios do not change because of the palm elevation angle, and we store all the calculated eigenvalues. When new data arrives, we calculate the distances between it and the database entries for comparison.

Figure 17: The four-dimensional characteristics area.

Table 1 shows the eigenvalues stored in the database. Each person has three four-dimensional entries, corresponding to palm images captured at the different heights, while the new piece of data is a palm image of the 30th test subject shot without height restriction for testing. We divide A, B, C, and D by A, so the first value becomes 1, and the other values are rounded to two decimal places. When new data appears, its distance to all of the four-dimensional entries is calculated. The minimum distance falls on the 30th subject, showing that our approach is feasible. In the experiment, we also divided the data by B, C, and D, and the minimum distance again fell on the 30th subject; thus the choice of divisor does not affect the final result, while the division prevents the palm-angle deviation problem when capturing the palm image.

Table 1: Distances when the areas are divided by the area of A.
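The minimum-distance matching against the stored four-dimensional vectors can be sketched as a nearest-neighbor search; the subject names and ratio values below are invented for illustration (each vector is divided by its first area, hence the leading 1.0).

```python
import math

def nearest_subject(query, database):
    """Return the subject whose stored 4-D vector is closest in Euclidean distance."""
    best, best_d = None, math.inf
    for subject, vectors in database.items():
        for v in vectors:
            d = math.dist(query, v)
            if d < best_d:
                best, best_d = subject, d
    return best, best_d

# Hypothetical area-ratio vectors, three shooting heights would give three per subject.
database = {
    "subject_29": [(1.0, 0.82, 0.64, 0.47), (1.0, 0.80, 0.66, 0.45)],
    "subject_30": [(1.0, 0.55, 0.71, 0.38), (1.0, 0.57, 0.69, 0.40)],
}
who, d = nearest_subject((1.0, 0.56, 0.70, 0.39), database)
```

The query is attributed to the subject with the smallest distance, mirroring the table's matching rule.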

To evaluate the accuracy of this method, we take two images of each individual's four-dimensional data as the basis: one palm image shot at 30 cm to 35 cm and one at 25 cm to 30 cm, while the remaining images, shot at 20 cm to 25 cm, are the test data. Because the experimental data set is relatively large and space is limited, Table 2 presents five entries as an illustration rather than all of them. For each test entry, the distances to the database entries are calculated; matches are shown in bold and mismatches in italic. The fourth entry is judged to belong to someone else, representing a recognition error; among the 30 files overall, there are 6 identification errors, a correct rate of 80%. Following the same test method with different data as the basis, the recognition accuracies are shown in Table 3.

Table 2: Partial database data (the 30 cm~35 cm and 25 cm~30 cm images) and test data.
Table 3: Accuracy for different bases.
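The accuracy figure quoted above follows from a simple computation over the reported counts (30 test images, 6 misidentifications); the helper function is illustrative only:

```python
def accuracy(total, errors):
    """Recognition accuracy: fraction of correctly identified samples."""
    return (total - errors) / total

# 30 test images with 6 misidentifications, as reported:
print(f"{accuracy(30, 6):.0%}")  # prints "80%"
```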

The identification accuracy does not reach 100%, probably because of environmental effects during shooting or because our choice of eigenvalues is not discriminative enough; nevertheless, the experimental data confirm that our eigenvalues are effective for identity recognition.

In practice, biometrics assists secure access and works together with RFID; combining it with passwords is the common way to compensate for occasional inaccuracy. Thus palm biometric security does not demand extremely high accuracy on its own.

6. Conclusion

In this paper, we proposed a palm image recognition method to identify individuals. We take the four triangular areas formed by the four finger-valley points as the feature. The proposed method has the advantage of using palm images for vehicular security with mobile devices, and it is not necessary to fix the palm at a specific location. The experimental results show that we can achieve over 80% accuracy, demonstrating that this method is feasible and can be used for car access verification.
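The triangular-area feature named above can be computed from 2-D landmark coordinates with the standard shoelace (cross-product) formula; the point coordinates below are hypothetical pixel positions, not the paper's data:

```python
def triangle_area(p1, p2, p3):
    """Area of the triangle spanned by three 2-D points
    (half the magnitude of the cross product of two edge vectors)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

# Hypothetical pixel coordinates of three finger-valley points:
print(triangle_area((0, 0), (4, 0), (0, 3)))  # → 6.0
```

Repeating this for the four triangles defined by the finger-valley points yields the four-dimensional feature vector used for matching.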

In future work, we hope to protect personal interests and safety with the simplest and most rapid method possible. For the four-dimensional eigenvalue method proposed in this paper, we must first overcome the problems caused by light sources and complex backgrounds when shooting a palm image, and then use more test data to verify that the method remains feasible beyond a small database. The method can also be applied to personal security and home security.
