Special Issue: Advancements in Mathematical Methods for Pattern Recognition and its Applications
Research Article | Open Access
Chenlei Lv, Junli Zhao, "3D Face Recognition based on Local Conformal Parameterization and Iso-Geodesic Stripes Analysis", Mathematical Problems in Engineering, vol. 2018, Article ID 4707954, 10 pages, 2018. https://doi.org/10.1155/2018/4707954
3D Face Recognition based on Local Conformal Parameterization and Iso-Geodesic Stripes Analysis
3D face recognition is an important topic in pattern recognition and computer graphics. We propose a novel approach to 3D face recognition using local conformal parameterization and iso-geodesic stripes. In our framework, the 3D facial surface is treated as a Riemannian 2-manifold and mapped into a 2D circular parameter domain by local conformal parameterization. In the parameter domain, geometric features are extracted from the iso-geodesic stripes. Combined with a relative position measure, Chain 2D Weighted Walkthroughs (C2DWW), the 3D face matching results can be obtained. The geometric features of the iso-geodesic stripes in the parameter domain are robust to head poses, facial expressions, and some occlusions. In experiments, our method achieves high recognition accuracy on 3D facial data from the Texas3D and Bosphorus3D face databases.
Face recognition has been investigated for many years. Its applications include biometric analysis, security systems, and information management. Traditional face recognition methods build the recognition framework on 2D facial images. Facial image data are easy to acquire, and the requirements on acquisition devices are relatively low. However, many factors influence the recognition rate of such methods, including illumination, cosmetics, blur, facial expressions, different kinds of occlusion (glasses, hair, and hands), and head poses. Handling these factors in a robust framework increases the computational cost and the complexity of the algorithm. Some works therefore turn to 3D facial data for face recognition.
Technologies for 3D data scanning have developed rapidly in recent years. Compared with traditional 3D scanning, the new methods do not require complex devices operating under strict conditions. Using a mobile phone with an additional scanning camera or a structured-light capability, the geometric information of a 3D face can be obtained. Building on these scanning technologies, 3D face recognition methods have been proposed that extract facial features from 3D geometric information. Their advantages are clear: robustness to the texture information of the face; a possible solution for removing the impact of head poses; and more geometric information with which to remove the impact of facial expressions and different kinds of occlusion.
Based on 3D facial data, we propose a novel face recognition method that maps the 3D facial surface into a 2D parameter domain and extracts geometric features from iso-geodesic stripes. In the 2D parameter domain, the geometric features of the 3D facial triangular meshes remain relatively intact, and the influence of head poses on the 3D face reduces to a 2D alignment problem. Combining the alignment of iso-geodesic stripes with an expression-robust measurement of geometric features, the face recognition result can be obtained in a simple way from the 2D parameter domain. More concretely, our face recognition framework consists of three steps: detecting facial landmarks in the 3D facial data and using them to extract the iso-geodesic stripes, which can be regarded as preprocessing; mapping the 3D facial surface into the 2D parameter domain by local conformal parameterization to obtain a facial representation; and computing geometric features from the iso-geodesic stripes in the 2D parameter domain. The feature measure, which we call Chain 2D Weighted Walkthroughs (C2DWW), is used to represent the geometric information of a 3D face and to construct the face measurement function. The pipeline of our method is shown in Figure 1. In summary, our contributions are as follows:
(1) We propose a 3D face recognition pipeline that extracts facial features and compares different facial data in an automatic system.
(2) We propose a measurement method, Chain 2D Weighted Walkthroughs (C2DWW), for comparing different facial data that is robust to different facial expressions and head poses.
(3) We propose a 2D facial representation based on local conformal parameterization. The representation can be regarded as a set of facial geometric features that removes the influence of different head poses.
The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 introduces the preprocessing in our method. Section 4 discusses the construction of the 2D facial area distortion representation. Section 5 describes the construction of C2DWW. Section 6 reports experiments on the public facial databases Texas3D and Bosphorus3D and discusses the face recognition performance of different methods.
2. Previous Work
For a face recognition framework, the key problem is removing the influence of factors such as head poses, facial expressions, cosmetics, illumination, and occlusions. Extracting discriminative and robust facial features from the input data determines recognition performance. According to the kind of facial features used, face recognition methods can be divided into several classes: image feature based, facial surface analysis based, local shape descriptor based, and partial face based.
Image feature based methods construct the recognition framework on facial images. A challenging task is face recognition with a single training sample per person. The facial features used include discriminative features based on image patches, sparse representation features for building a variation dictionary, expression subspace representations, sparse representations based on illumination transfer, local generic representations, and sparse discriminative multimanifold embedding features. These methods attempt to reconstruct highly discriminative facial features from a single facial image to cover the different influencing factors. To achieve a high recognition rate, the feature extraction algorithms are complicated. Lacking global geometric features, these methods are limited on facial images with extreme head poses, blur, or occlusions.
Facial surface analysis based methods use global geometric features of the 3D facial surface to construct the recognition framework. The classical idea is to extract facial features directly from the surface. Some methods construct parametric representations of the facial surface in Euclidean space, such as the canonical form and iso-geodesic stripes. In non-Euclidean settings, researchers have proposed frameworks such as Ricci flow mapping, elastic measures based on radial curves [10, 11], and spherical harmonic feature matching. These methods extract facial features from the facial surface for global face matching and achieve accurate recognition results. However, they require accurate facial surface data from the 3D face scan, which limits their applications. The quality of a raw face scan is affected by the scanning device, the distance to the subject, and various occlusions (hair, glasses, or hands), which increases the difficulty of triangular mesh reconstruction.
Local shape descriptor based methods extract discrete facial features from the facial data to construct the recognition framework. The discrete features include point cloud sets, 3D key points [14–17], and local surface analysis [18–20]. Discrete features such as surface points and local geometric descriptors do not require complex preprocessing for face cropping or high-quality triangular meshes. Image-based learning frameworks can conveniently be employed with such discrete features [18, 21]. However, these methods are limited by their local shape representations: to achieve a global analysis of the facial data, learning frameworks over local shape features require a large computational cost and complex structures to remove the influence of facial expressions, occlusions, and head poses.
Partial face based methods extract facial features from partial face regions. These methods select local facial surfaces around the eyes [22, 23] or the nose [24, 25], which are not affected by facial expressions. They provide a simple way to remove influencing factors without complex feature extraction algorithms, but they do not provide a global analysis of the facial data for accurate face matching. In our method, we propose an improved scheme for partial face based recognition: iso-geodesic stripes around the nasal region are extracted and mapped into the 2D parameter domain, where the influence of facial expressions and head poses can be removed.
3. Preprocessing for 3D Faces
To construct a 3D face recognition framework, we first extract facial features from the 3D facial data. In our framework, preprocessing consists of two steps: facial landmark detection and iso-geodesic stripe extraction. Facial landmarks are needed to obtain geometric features and to align different faces. For 3D face data, detecting special points by shape analysis is an established approach in recent research. We apply this idea to locate the nasal tip and the eyebrow tip. For iso-geodesic stripe extraction, we compute the geodesic between the two landmarks to define a fixed direction and a normalized distance; the geodesic path is extracted with a fast exact mesh geodesic algorithm. Combining the landmarks with this direction and distance, we obtain the iso-geodesic stripes.
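As a simpler illustration of the geodesic computation above (the paper itself uses an exact mesh geodesic algorithm), geodesic distances can be approximated by running Dijkstra on the mesh edge graph; the function and variable names here are ours, not the paper's:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def edge_graph_geodesics(vertices, faces, source):
    """Approximate geodesic distances from `source` to all vertices
    by running Dijkstra on the mesh edge graph (an upper bound on
    the exact surface geodesic distance)."""
    n = len(vertices)
    rows, cols, vals = [], [], []
    for tri in faces:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            w = np.linalg.norm(vertices[a] - vertices[b])
            rows += [a, b]; cols += [b, a]; vals += [w, w]
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    return dijkstra(graph, indices=source)

# toy example: a single right triangle
V = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
F = np.array([[0, 1, 2]])
d = edge_graph_geodesics(V, F, source=0)
```

On dense facial meshes the edge-graph approximation is close enough for banding points into stripes, while exact algorithms remain preferable for the nasal bridge path itself.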
Iso-geodesic stripes have been defined in prior work. On the facial surface, the stripes are banded regions whose points lie within specific geodesic distance ranges from the nasal tip. The bands are adjacent to one another and cover different regions of the face. Function (1) defines an iso-geodesic stripe: S is the face surface and p is a point on S. Every point on S has a geodesic distance to the nasal tip p_nose, and different geodesic distance ranges yield different bands c_n. In Function (1), the centre of the iso-geodesic stripes is the nasal tip. The stripes do not cover the whole facial surface, and the nasal region itself is not treated carefully; the geometric features from a single stripe region are unstable for face recognition. Following the geodesic path between the nasal tip and the eyebrow tip (the nasal bridge curve), we therefore extract stripe regions with different centres. In Figure 2, we show the facial landmarks, the nasal bridge curve, and different iso-geodesic stripe regions on the face surface.
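The equation referenced as Function (1) is not reproduced above. Following the iso-geodesic stripe construction of Berretti et al. cited in the text, it plausibly takes the following form, where δ is the stripe width and d_g the geodesic distance; the exact symbols are our assumptions:

```latex
c_n = \{\, p \in S \;:\; (n-1)\,\delta \le d_g(p,\, p_{\mathrm{nose}}) < n\,\delta \,\}, \qquad n = 1, 2, \dots
```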
4. Construction of 2D Face Representation
The iso-geodesic stripes divide the facial surface into different regions along the nasal bridge. Some methods [8, 24, 28] extract facial features directly from the stripes to construct the recognition framework. However, the influence of head poses is not removed from the stripes, which reduces the accuracy of the stripe features. In our framework, we map the 3D facial surface into a 2D parameter domain using local conformal parameterization. In the 2D parameter domain, the facial data can conveniently be aligned by two facial landmarks. Based on the 2D reflection of the face in the parameter domain, we present a 2D face representation that corrects the distortion of the triangular meshes introduced by mapping the 3D facial surface to its 2D reflection.
4.1. Local Conformal Parameterization
Local conformal parameterization constructs a parametric representation of a 3D object and is widely used in texture mapping and 3D object alignment. In our framework, it maps the 3D facial data into the 2D parameter domain while preserving intrinsic characteristics. The parameterization can be cast as optimizing two energies: the Dirichlet energy (2) and the Chi energy (3), where x_i and x_j are points on the 3D facial surface, u_i and u_j are the corresponding points in the 2D mapping result, N(i) denotes the points adjacent to i, and the angles α_ij, β_ij, γ_ij, and δ_ij are shown in Figure 3. We obtain the local conformal parameterization result by computing the extremum of the two energy functions via a linear equation system.
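Energies (2) and (3) are not shown above. In the intrinsic parameterizations of Desbrun et al., which the cited method follows, they are commonly written as below; the symbols x_i (3D vertices), u_i (their 2D images), and the angle names are assumptions chosen to match the description of Figure 3:

```latex
E_D(u) = \tfrac{1}{2}\sum_{(i,j)\in E} \left(\cot\alpha_{ij} + \cot\beta_{ij}\right)\,\|u_i - u_j\|^2,
\qquad
E_{\chi}(u) = \tfrac{1}{2}\sum_{(i,j)\in E} \frac{\cot\gamma_{ij} + \cot\delta_{ij}}{\|x_i - x_j\|^2}\,\|u_i - u_j\|^2 .
```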
The 2D reflection of the face in the parameter domain is obtained from the local conformal parameterization result. The border of the reflection is the outermost geodesic circle of the iso-geodesic stripes, and the centre is a point on the nasal bridge. Each triangular mesh of the 3D facial surface is mapped into the 2D parameter domain, and together the mapped meshes form a new disc in the domain. In Figure 4, we show the 2D reflection result with iso-geodesic stripes. The facial landmarks are also mapped into the parameter domain; connecting the nasal tip and the eyebrow tip gives a direction vector, and rotating the 2D reflection so that this vector points upward removes the influence of head poses.
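The pose-normalization step just described (rotating the 2D reflection so the nose-to-eyebrow vector points up) can be sketched as follows; the function name and conventions are illustrative, not the paper's:

```python
import numpy as np

def align_to_vertical(points2d, nose, brow):
    """Rotate a 2D parameterization about the nose landmark so the
    nose-to-eyebrow vector points straight up (+y), cancelling
    in-plane head rotation."""
    v = np.asarray(brow) - np.asarray(nose)
    theta = np.arctan2(v[0], v[1])        # angle away from the +y axis
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])       # counterclockwise rotation by theta
    return (np.asarray(points2d) - nose) @ R.T

# brow lies to the right of the nose; after alignment it sits above it
pts = np.array([[0.0, 0.0], [1.0, 0.0]])
aligned = align_to_vertical(pts, nose=pts[0], brow=pts[1])
```

Because the conformal map fixes scale only up to a global factor, a full pipeline would typically also normalize by the nose-to-eyebrow distance; that step is omitted here.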
4.2. 2D Face Representation
Ideally, the triangular meshes retain complete geometric information in the mapping from the 3D face to the 2D facial reflection under local conformal parameterization. However, the Gaussian curvature varies from point to point on the 3D facial surface, reflecting changes in the first fundamental form, so area distortions arise in the mapped triangles. Therefore, the facial features cannot be extracted from the 2D facial reflection directly. To correct the area distortions of the triangular meshes in the 2D facial reflection, we propose a 2D face representation that records the area scaling rate: the area ratio between a triangular mesh on the 3D face and its image in the 2D facial reflection, which can be regarded as the "density" of each triangular mesh. Concretely, let St = {t_1, ..., t_k} be the triangles of the 3D facial surface, where t_i is a triangle and k is the number of triangles; let St' be the 2D facial reflection of St, with t_i' the 2D image of t_i. The area distortion representation stores, for each i, the ratio of the area of t_i to the area of t_i'. In Figure 5, we show the area distortions of the 2D face representation: the deeper the colour, the larger the area distortion.
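The per-triangle area scaling rate described above can be computed directly from the two vertex sets; this is a minimal sketch with our own function names, assuming matching triangle indexing between the 3D mesh and its 2D reflection:

```python
import numpy as np

def triangle_areas(verts, faces):
    """Area of each triangle given vertex coordinates (2D or 3D)."""
    a = verts[faces[:, 1]] - verts[faces[:, 0]]
    b = verts[faces[:, 2]] - verts[faces[:, 0]]
    if verts.shape[1] == 2:               # pad 2D coordinates for the cross product
        a = np.pad(a, ((0, 0), (0, 1)))
        b = np.pad(b, ((0, 0), (0, 1)))
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1)

def area_scaling(verts3d, verts2d, faces):
    """Per-triangle area ratio between the 3D surface and its 2D
    reflection -- the 'density' used to correct the parameterization."""
    return triangle_areas(verts3d, faces) / triangle_areas(verts2d, faces)

# one triangle whose 3D area (2.0) is four times its 2D area (0.5)
V3 = np.array([[0.0, 0, 0], [2, 0, 0], [0, 2, 0]])
V2 = np.array([[0.0, 0], [1, 0], [0, 1]])
F = np.array([[0, 1, 2]])
r = area_scaling(V3, V2, F)
```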
5. Chain 2D Weighted Walkthroughs
Based on the 2D face representation, we propose a facial surface measure, Chain 2D Weighted Walkthroughs (C2DWW). C2DWW builds on 2DWW, which is used to measure the relative positions between iso-geodesic stripes of different nasal regions. As introduced in Section 3, iso-geodesic stripes can be obtained for different centres; a stripe region is a set of iso-geodesic stripes sharing the same centre. Moving the centre along the nasal bridge yields different stripe regions, which can be regarded as a chain covering the whole nasal region. Comparing the corresponding stripe regions of two faces yields the face matching result. C2DWW operates at two levels: a stripe measure within the same stripe region, and a measure between stripe regions.
5.1. Stripes Measure in Same Stripe Region
The iso-geodesic stripes in our framework are represented by discrete point sets, and the stripe measure is based on the relative positions of these points. The relative positions are encoded as shown in Figure 6 and (4), where (x1, y1) and (x2, y2) are points in the stripes. On each axis there are three conditions: on the horizontal axis, right, left, and near; on the vertical axis, above, below, and adjacent. A threshold determines when two points are coded as near; an appropriate threshold reduces the influence of area change in the conformal mapping. A weight code is then defined for each relative-position condition, and (5) gives its computation. Let cA and cB be two iso-geodesic stripes from one face surface, with A and B the stripe indexes, and let N(cA) and N(cB) be the density point numbers of the stripes. The density point number is obtained by discretizing the area of the stripes, as follows. First, we set a constant d to fix the number of points in cA and cB. Second, we assign points to each triangular mesh according to the area ratio between the mesh and the stripe, where the areas are corrected by the area distortion representation. Finally, we count the points of cA and cB. The weight w(i, j) is the number of point pairs that satisfy the encoding condition (i, j), computed from the density point numbers in the triangular meshes. In Figure 7, we show an instance of this computation. Over the combinations of i and j, we obtain a 3×3 measure matrix for the two stripes. The entries for different directions are multiplied by corresponding parameters to reflect the different degrees of the distribution; in (6), we add these parameters to correct the measure matrix.
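The 3×3 measure matrix described above can be sketched as follows; we normalize counts to frequencies and use our own names (`walkthrough_matrix`, threshold `tau`), leaving out the per-direction correction parameters of (6):

```python
import numpy as np

def walkthrough_matrix(stripe_a, stripe_b, tau=0.05):
    """3x3 matrix of relative-position frequencies between two point
    sets: entry (i, j) is the fraction of cross pairs whose horizontal
    code is i and vertical code is j. Codes: 0 = left/below,
    1 = near/adjacent (difference below `tau`), 2 = right/above."""
    A = np.asarray(stripe_a); B = np.asarray(stripe_b)
    dx = B[None, :, 0] - A[:, None, 0]          # all cross-pair x offsets
    dy = B[None, :, 1] - A[:, None, 1]          # all cross-pair y offsets
    cx = np.where(np.abs(dx) < tau, 1, np.where(dx > 0, 2, 0))
    cy = np.where(np.abs(dy) < tau, 1, np.where(dy > 0, 2, 0))
    W = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            W[i, j] = np.mean((cx == i) & (cy == j))
    return W

# one point against two: one pair is right-above, one is right-below
A = np.array([[0.0, 0.0]])
B = np.array([[1.0, 1.0], [1.0, -1.0]])
W = walkthrough_matrix(A, B)
```

Normalizing by the number of pairs keeps the matrix comparable across stripes with different density point numbers.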
5.2. Stripe Regions Measure
Combining the measures from the different stripes of the 2D face representation, we propose the stripe region measure. In (8), cA and cB are stripes in face F, and cA' and cB' are the corresponding stripes in face F'; the similarity of the stripe pairs is obtained by the stripe measure. In practice a stripe region contains more than two iso-geodesic stripes, so we extend (8) to (11). The similarity measure in (11) is a mathematical metric and is determined by (9), where the weight term expresses the contribution of stripes i and j to the face similarity measure. The stripe indexes i and j should be adjacent, because the relative positions of adjacent stripes are robust to facial expressions (see (12)), where the expression term denotes the change of the stripe points under facial expressions. Based on the similarity measure between two stripe regions, the C2DWW measure result can be computed. As introduced above, different stripe regions can be extracted from the nasal region using different centres; following the nasal bridge, we obtain the similarity of corresponding stripe regions of two faces, which can be regarded as a "chain" measurement. Equation (13) gives the C2DWW measure between two faces F and F' as the combination of the stripe region measures over the regions constructed from each centre q.
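The chain combination in (13) reduces to aggregating per-region similarity scores into one face match score. Since the exact weighting of (13) is not reproduced above, this sketch assumes a weighted mean with uniform default weights; names are ours:

```python
import numpy as np

def chain_similarity(region_sims, weights=None):
    """Combine per-region similarity scores along the nasal-bridge
    'chain' into a single C2DWW-style match score. A weighted mean
    is assumed here; the paper's exact weighting may differ."""
    s = np.asarray(region_sims, dtype=float)
    w = np.ones_like(s) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * s) / np.sum(w))

# four stripe regions along the nasal bridge, uniform weights
score = chain_similarity([0.9, 0.8, 0.85, 0.95])
```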
6. Experiments
Using the C2DWW measure, face matching results are obtained and can be used directly for face recognition. In this section, we evaluate recognition accuracy. Two public facial databases, Texas3D and BosphorusDB, are used to evaluate the identification rate of our method. Texas3D contains 1149 scans of 118 persons with different facial expressions; the facial data are essentially frontal scans. BosphorusDB contains 4666 scans of 105 persons with different facial expressions, head poses, and occlusions. The experiments consist of three parts. First, we determine some parameters of our framework. Second, we evaluate the identification rate of different face recognition methods on the public databases. Third, we evaluate the identification rate on facial data with different head poses and occlusions.
6.1. Parameter Configuration
In our framework, several parameters influence the accuracy of the facial surface measure and thus the recognition performance: the number of iso-geodesic stripes and the weights in (11), and the number of stripe regions in (13).
Naturally, the number of iso-geodesic stripes in a stripe region influences the performance of the method. Additional stripes improve precision, but the time complexity of the algorithm increases, and beyond a certain quantity the improvement is no longer obvious. Overly dense stripes also increase sensitivity to facial expressions; conversely, too few stripes cannot meet the requirements of facial similarity measurement. We select about 200 scans of 100 persons as the gallery set and 100 scans as the probe set from Texas3D. In Figure 8, we show receiver operating characteristic (ROC) results for different numbers of stripes (S3 = 3, S4 = 4, S5 = 5, S6 = 6, S7 = 7). For the weights in (9), we choose a fixed set of values and use a single stripe region constructed from the nasal tip. For the weights in (11), the general approach is to optimize a set of weights with a learning framework such as linear regression; we instead use equal weights in our framework, because learned weights may yield locally optimal results. In specific applications, the weights can be computed according to the actual situation.
Another parameter to determine is the number of stripe regions. Following the centre points on the nasal bridge, different numbers of stripe regions can be selected in the face recognition framework. As introduced in Section 3, the centres are obtained from the geodesic path between the nasal tip and the eyebrow tip, which can be regarded as the nasal bridge curve. We divide this curve into equal segments and take the division points as centres. In Figure 9, we show facial recognition ROC results on Texas3D for different numbers of points (P1 = 1, P2 = 2, P3 = 3, P4 = 4, P5 = 5, P6 = 6; P1 means a single stripe region centred at the nasal tip, P2 means two stripe regions centred at the nasal tip and the eyebrow tip, and Pn means n stripe regions whose centres are the nasal tip, the eyebrow tip, and n-2 intermediate points on the nasal bridge curve). In summary, the parameters with the best identification rate are S5 and P4.
6.2. Facial Identification in Different Facial Expressions
In face recognition, the influence of facial expressions must be considered carefully. We compare the identification performance of our framework against other methods ([8, 11, 24, 28]) that also use geodesic stripes or geodesic curves in their recognition frameworks. The test sets are constructed from Texas3D and BosphorusDB. For Texas3D, we use the test set introduced in the parameter configuration part. For BosphorusDB, we select 6 samples per person (630 in total) with different expressions as the gallery set and 2 samples per person (210 in total) without obvious facial expressions as the probe set. We compute Cumulative Matching Characteristic (CMC) and ROC results for the different methods on these test sets. In Figure 10, we show the CMC and ROC results on the Texas3D test set; in Figure 11, those on the BosphorusDB test set. In Table 1, we summarize the evaluation results. The results show that our method achieves better recognition performance on facial data with different expressions.
6.3. Facial Identification in Different Head Poses and Occlusions
The facial data in BosphorusDB include influencing factors such as head poses and occlusions (hands and hair), which must be considered in a face recognition framework. Some facial scans with large head poses or occlusions contain only half of the facial surface. In our framework, the C2DWW can still obtain a reasonable measure from a half face, because the face is symmetric and the relative positions within a half face take values similar to those of the whole face. We construct a subtest set from BosphorusDB to evaluate the performance of our method on half faces: we select 6 scans per person (630 in total) with different head poses and occlusions as the probe set, using the gallery set discussed in the previous part, and extract half-face data from the probe set for the identification evaluation. In Figure 12, we show the CMC and ROC results.
In Table 2, we show the evaluation results of the different methods on this subtest set. The results show that our method achieves better recognition performance on facial data with different head poses and occlusions.
6.4. Performance Evaluation
In this part, we evaluate the face matching speed of the different methods. The methods [11, 24, 28] extract geodesic curves from the facial surface directly as facial features; the time cost of matching such curves is high for recognition over a large facial database. The stripe-based method constructs geodesic stripes on the facial surface and computes the relative positions of vertices in different stripes; its geometric features are encoded into a low-dimensional feature vector, so face matching reduces to linear vector computation. Our framework inherits this fast computation and additionally removes the influence of head poses and some occlusions. In Table 3, we show the average face search time of the different methods on the different facial databases (preprocessing time is not included).
7. Conclusion
We have proposed a 3D face recognition method based on local conformal parameterization and iso-geodesic stripe analysis. Using local conformal parameterization, we obtain a 2D face representation that removes the influence of head poses. Based on the iso-geodesic stripes, we extract facial features from the 2D face representation and use C2DWW to obtain face matching results. Our method is robust to different head poses, facial expressions, and some kinds of occlusion. Its biggest limitation is sensitivity to occlusions in the nasal region: such occlusions break the nasal geometric features and affect the accuracy of the local conformal parameterization.
In future work, we will seek a solution to this problem, for example a face model reconstruction that repairs the geometric information of the nasal region. We will also consider more facial information, such as the eye and mouth regions, in the recognition framework, and we will look for more convenient facial features for the face representation that do not require surface searching such as geodesic path computation.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (no. 61702293). We also thank the providers of the facial databases (Texas3D and Bosphorus3D).
References
- J. Lu, Y.-P. Tan, and G. Wang, “Discriminative multimanifold analysis for face recognition from a single training sample per person,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 39–51, 2013.
- M. Yang, L. Van Gool, and L. Zhang, “Sparse variation dictionary learning for face recognition with a single training sample per person,” in Proceedings of the 2013 14th IEEE International Conference on Computer Vision, ICCV 2013, pp. 689–696, Australia, December 2013.
- H. Mohammadzade and D. Hatzinakos, “Projection into expression subspaces for face recognition from single sample per person,” IEEE Transactions on Affective Computing, vol. 4, no. 1, pp. 69–82, 2013.
- L. Zhuang, A. Y. Yang, Z. Zhou, S. S. Sastry, and Y. Ma, “Single-sample face recognition with image corruption and misalignment via sparse illumination transfer,” in Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013, pp. 3546–3553, USA, June 2013.
- P. Zhu, M. Yang, L. Zhang, and I. Lee, “Local Generic Representation for Face Recognition with Single Sample per Person,” in Proceedings of the Asian Conference on Computer Vision, pp. 34–50, Springer.
- P. Zhang, X. You, W. Ou, C. L. Philip Chen, and Y.-M. Cheung, “Sparse discriminative multi-manifold embedding for one-sample face identification,” Pattern Recognition, vol. 52, pp. 249–259, 2016.
- A. M. Bronstein, M. M. Bronstein, and R. Kimmel, “Three-dimensional face recognition,” International Journal of Computer Vision, vol. 64, no. 1, pp. 5–30, 2005.
- S. Berretti, A. Del Bimbo, and P. Pala, “3D face recognition using isogeodesic stripes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2162–2177, 2010.
- W. Zeng, D. Samaras, and D. Gu, “Ricci flow for 3D shape analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 662–677, 2010.
- H. Drira, B. Ben Amor, A. Srivastava, M. Daoudi, and R. Slama, “3D Face recognition under expressions, occlusions, and pose variations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 9, pp. 2270–2283, 2013.
- B. B. Amor, H. Drira, S. Berretti, M. Daoudi, and A. Srivastava, “4-D facial expression recognition by learning geometric deformations,” IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2443–2457, 2014.
- P. Liu, Y. Wang, D. Huang, Z. Zhang, and L. Chen, “Learning the spherical harmonic features for 3-D face recognition,” IEEE Transactions on Image Processing, vol. 22, no. 3, pp. 914–925, 2013.
- H. Mohammadzade and D. Hatzinakos, “Iterative closest normal point for 3D face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 2, pp. 381–397, 2013.
- S. Berretti, N. Werghi, A. Del Bimbo, and P. Pala, “Matching 3D face scans using interest points and local histogram descriptors,” Computers and Graphics, vol. 37, no. 5, pp. 509–525, 2013.
- S. Berretti, N. Werghi, A. del Bimbo, and P. Pala, “Selecting stable keypoints and local descriptors for person identification using 3D face scans,” The Visual Computer, vol. 30, no. 11, pp. 1275–1292, 2014.
- H. Li, D. Huang, J.-M. Morvan, Y. Wang, and L. Chen, “Towards 3D face recognition in the real: a registration-free approach using fine-grained matching of 3D Keypoint descriptors,” International Journal of Computer Vision, vol. 113, no. 2, pp. 128–142, 2015.
- Y. J. Lei, Y. L. Guo, M. Hayat, M. Bennamoun, and X. Z. Zhou, “A Two-Phase Weighted Collaborative Representation for 3D partial face recognition with single sample,” Pattern Recognition, vol. 52, pp. 218–237, 2016.
- D. Huang, M. Ardabilian, Y. Wang, and L. Chen, “3-D face recognition using eLBP-based facial description and local feature hybrid matching,” IEEE Transactions on Information Forensics and Security, vol. 7, no. 5, pp. 1551–1565, 2012.
- D. Smeets, J. Keustermans, D. Vandermeulen, and P. Suetens, “MeshSIFT: local surface features for 3D face recognition under expression variations and partial data,” Computer Vision and Image Understanding, vol. 117, no. 2, pp. 158–169, 2013.
- Y. Lei, M. Bennamoun, M. Hayat, and Y. Guo, “An efficient 3D face recognition approach using local geometrical signatures,” Pattern Recognition, vol. 47, no. 2, pp. 509–524, 2014.
- P. Kamencay, R. Hudec, M. Benco, and M. Zachariasova, “2D-3D face recognition method based on a modified CCA-PCA algorithm,” International Journal of Advanced Robotic Systems, vol. 11, no. 1, 2014.
- P. Yan and K. W. Bowyer, “Biometric recognition using 3D ear shape,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 8, pp. 1297–1308, 2007.
- H. Chen and B. Bhanu, “Human ear recognition in 3D,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 718–737, 2007.
- H. Drira, B. B. Amor, A. Srivastava, and M. Daoudi, “A Riemannian analysis of 3D nose shapes for partial human biometrics,” in Proceedings of the 12th International Conference on Computer Vision, ICCV 2009, pp. 2050–2057, Japan, October 2009.
- M. Emambakhsh and A. Evans, “Nasal Patches and Curves for Expression-Robust 3D Face Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 5, pp. 995–1007, 2017.
- S. Z. Gilani, F. Shafait, and A. Mian, “Shape-based automatic detection of a large number of 3D facial landmarks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, pp. 4639–4648, USA, June 2015.
- V. Surazhsky, T. Surazhsky, D. Kirsanov, S. J. Gortler, and H. Hoppe, “Fast exact and approximate geodesics on meshes,” in Proceedings of the ACM SIGGRAPH 2005, pp. 553–560, USA, August 2005.
- R. Ahdid, E. M. Barrah, S. Sa et al., “Facial surface analysis using iso-geodesic curves in three dimensional face recognition system,” 2016, https://arxiv.org/abs/1608.08878.
- M. Desbrun, M. Meyer, and P. Alliez, “Intrinsic parameterizations of surface meshes,” Computer Graphics Forum, vol. 21, no. 3, pp. 209–218, 2002.
- S. Berretti, A. Del Bimbo, and E. Vicario, “Weighted walkthroughs between extended entities for retrieval by spatial arrangement,” IEEE Transactions on Multimedia, vol. 5, no. 1, pp. 52–70, 2003.
Copyright © 2018 Chenlei Lv and Junli Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.