Resampling to Speed Up Consolidation of Point Clouds
Processing of large-scale scattered point clouds has become a hot topic in computer graphics research. The Weighted Locally Optimal Projection (WLOP) algorithm, which produces a denoised, outlier-free, and evenly distributed set of particles over the original point cloud, has been used by many researchers to consolidate unorganized 3D point clouds. However, the algorithm is relatively inefficient, owing to the large amount of point cloud data and its iterative calculation. In this paper, we propose a resampling method for the point set of a 3D model that significantly improves the computing speed of the WLOP algorithm. To measure the impact of the error, which grows as the calculation efficiency improves, on the accuracy of the algorithm, we define two quantitative indicators: the projection error and the uniformity of distribution. The performance of our method is evaluated using both quantitative and qualitative analyses. Our experimental validation demonstrates that the method greatly improves computational efficiency, at the cost of a slightly reduced projection accuracy in comparison with WLOP.
As a topic of growing interest in fundamental computer graphics research, reverse engineering of 3D models from unorganized point cloud data has been considered by many authors, and various results on surface reconstruction from point clouds have been published in recent years. Methods proposed in this regard include RBF-based approaches [1–4], the integration of Voronoi diagrams with a variational method, the Poisson surface reconstruction technique, and the smooth signed distance method. The local moving least squares (MLS) surface by Levin and its variants have proven to be a powerful surface representation of point set data. Guennebaud and Gross proposed algebraic point set surfaces (APSS), and Öztireli et al. proposed a robust implicit MLS (RIMLS) variant to project a point set (or a mesh) onto the MLS surface.
During the past two decades, the preprocessing of unorganized point cloud data has received considerable attention, mainly by virtue of its theoretical importance as well as its extensive applications. Many problems have yet to be addressed, however. In particular, the 3D surface represented by unorganized point clouds is typically noisy and contains holes, with high variations in point density caused by acquisition errors or the misalignment of multiple scans. Preprocessing for surface reconstruction includes denoising, outlier removal, thinning, orientation, and redistribution of the input points (see, e.g., [11–15]), which has resorted to resampling for point cloud consolidation. The WLOP (Weighted Locally Optimal Projection) and LOP (Locally Optimal Projection) operators have proven more robust against the noise and outliers of raw scanned data, in addition to their advantage of creating evenly redistributed point clouds. However, these methods are limited in preserving the geometric features of the model during projection (see [16, 17]). Such limitations need to be examined with concrete quantitative evidence, which is obtainable only via the computationally expensive reconstruction of large point set data. The open problem we address in this paper is the slow computing speed caused by the multiple iterations of previously proposed models, and the resulting limitation on reconstructing large point set data.
In this paper, we present a resampling method for point cloud processing: a new and more efficient organization of the data. We downsample the point set using the k-nearest neighbor algorithm and the normal vectorial angle, which reduces the amount of computation required, apply WLOP to each subset, and merge (upsample) the results. A numerical simulation of accuracy, distribution uniformity, and computing time is given to illustrate the applicability. Our method reduces noise and creates evenly redistributed points, like WLOP. Further experiments demonstrate that the time complexity is significantly reduced by our method.
Four main contributions of this paper can be summarized as follows.
(1) We review the application and progress of the WLOP algorithm for geometry reconstruction.
(2) We design an optimization process to reduce the computational complexity of WLOP. In the proposed algorithm, the surface points are classified by the k-nearest neighbor algorithm, and the normal vector is used to choose the point from every group for each subsampled point set. The WLOP operator is then applied to the point clouds of every subset. This approach speeds up the computation two to three times.
(3) We contribute to the scarce literature on quantitative analysis for point cloud processing, defining projection error indicators with reference to the error estimation analysis of point cloud simplification algorithms, and uniform distribution indicators with reference to point density.
(4) Our proposed Resampling WLOP approach has been evaluated on models with various shape complexities and noise levels. The effectiveness and performance of our method are validated and illustrated through experimental results and comparison with the WLOP method.
The rest of this paper is organized as follows. Section 2 reviews the related studies in the consolidation of unorganized 3D point clouds from which the proposed method is derived. Section 3 provides a detailed description of the resampling method. The evaluation criteria are described in Section 4. The proposed method is implemented and evaluated against the WLOP algorithm in Section 5, followed by the concluding comments in Section 6.
2. Related Work
In this section, we briefly review the application and research of the Weighted Locally Optimal Projection (WLOP) algorithm among the various methods hitherto developed for the processing of point clouds. Lipman et al. proposed a parameterization-free projection operator, the Locally Optimal Projection (LOP), for geometry reconstruction, reminiscent of the well-known multivariate median problem. This method is robust to the noise, outliers, and nonuniformities of raw scanned data, and operates well on raw data without relying on a local parameterization of the points or on their local orientation. Given an unorganized set of input points $P = \{p_j\}_{j \in J} \subset \mathbb{R}^3$, LOP defines a set of projected points $Q = \{q_i\}_{i \in I}$ by a fixed-point iteration where, given the current iterate $Q^{(k)}$, $k = 0, 1, \ldots$, the next iterate $Q^{(k+1)} = X$ minimizes

$$E(X) = \sum_{i \in I} \sum_{j \in J} \|x_i - p_j\|\, \theta\bigl(\|q_i^{(k)} - p_j\|\bigr) + \lambda \sum_{i \in I} \sum_{i' \in I \setminus \{i\}} \eta\bigl(\|x_i - q_{i'}^{(k)}\|\bigr)\, \theta\bigl(\|q_i^{(k)} - q_{i'}^{(k)}\|\bigr),$$

where $\theta(r) = e^{-r^2/(h/4)^2}$ is a rapidly decreasing weight function with support radius $h$, $\eta(r)$ is a repulsion function penalizing points that come too close to one another, and $\lambda$ balances the attraction and repulsion terms.
However, LOP may not work well when the distribution of the input points is highly nonuniform. Huang et al. improved this operator by incorporating locally adaptive density weights into LOP, to improve its ability to deal with nonuniformity. The improved operator, named WLOP (Weighted Locally Optimal Projection), is supported by a robust normal estimation method, which they presented to assign a consistent normal to each projected point.
The projection of point $q_i$ at iteration $k+1$ is

$$q_i^{(k+1)} = \sum_{j \in J} p_j\, \frac{\alpha_{ij}/v_j}{\sum_{j' \in J} \alpha_{ij'}/v_{j'}} + \mu \sum_{i' \in I \setminus \{i\}} \bigl(q_i^{(k)} - q_{i'}^{(k)}\bigr)\, \frac{w_{i'}\beta_{ii'}}{\sum_{i'' \in I \setminus \{i\}} w_{i''}\beta_{ii''}},$$

where $\alpha_{ij} = \theta(\|q_i^{(k)} - p_j\|)/\|q_i^{(k)} - p_j\|$ and $\beta_{ii'} = \theta(\|q_i^{(k)} - q_{i'}^{(k)}\|)\,\bigl|\eta'(\|q_i^{(k)} - q_{i'}^{(k)}\|)\bigr|/\|q_i^{(k)} - q_{i'}^{(k)}\|$, and where $v_j = 1 + \sum_{j' \in J \setminus \{j\}} \theta(\|p_j - p_{j'}\|)$ and $w_i = 1 + \sum_{i' \in I \setminus \{i\}} \theta(\|q_i^{(k)} - q_{i'}^{(k)}\|)$ are the adaptive density weights.
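As an illustration, one WLOP iteration can be sketched in a few lines of NumPy. This is our own dense $O(nm)$ sketch of the standard WLOP update (function and parameter names are ours), not the accelerated implementation evaluated in this paper:

```python
import numpy as np

def theta(r, h):
    # Rapidly decaying smooth weight used by LOP/WLOP.
    return np.exp(-(r / (h / 4.0)) ** 2)

def wlop_iterate(P, Q, h=1.0, mu=0.45, n_iter=1):
    """Run WLOP iterations projecting Q onto the shape of the input cloud P.

    P : (m, d) noisy input points; Q : (n, d) current projected points.
    Dense O(n*m) sketch for illustration only, not the accelerated version.
    """
    eps = 1e-12
    # Adaptive density weights v_j of the input points; the self term
    # theta(0) = 1 supplies the "1 +" in the definition of v_j.
    dPP = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    v = theta(dPP, h).sum(axis=1)
    for _ in range(n_iter):
        # Attraction term: density-weighted local median of P.
        dQP = np.linalg.norm(Q[:, None, :] - P[None, :, :], axis=-1)  # (n, m)
        alpha = theta(dQP, h) / np.maximum(dQP, eps)
        a = alpha / v[None, :]
        attract = (a @ P) / np.maximum(a.sum(axis=1, keepdims=True), eps)
        # Repulsion term between projected points, with density weights w_i.
        dQQ = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=-1)
        w = theta(dQQ, h).sum(axis=1)
        beta = theta(dQQ, h) / np.maximum(dQQ, eps)  # |eta'(r)| = 1 for eta(r) = -r
        np.fill_diagonal(beta, 0.0)
        b = beta * w[None, :]
        diff = Q[:, None, :] - Q[None, :, :]
        repulse = (diff * b[..., None]).sum(axis=1)
        repulse = repulse / np.maximum(b.sum(axis=1, keepdims=True), eps)
        Q = attract + mu * repulse
    return Q
```

With $\mu = 0$ the update is a convex combination of input points, so each projected point stays inside the bounding box of $P$; the repulsion term then spreads the points apart.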
Many surface reconstruction algorithms can greatly benefit from the output of this consolidation framework. Huang et al. presented a method to assign consistently oriented normal vectors to unorganized points with noise, nonuniformities, and thin sharp features, building on the consolidation approach. They presented a framework that distributes points by inserting samples into sparse regions, using the WLOP operator, which performs downsampling and relaxation. Huang et al. also altered the LOP operator to make it normal- or edge-aware, allowing for resampling away from edges.
Although LOP and WLOP are both effective approaches for reconstructing geometry, several drawbacks cannot be dismissed, for instance, their limitations in guaranteeing the smoothness of the projection and in reconstructing models that contain sharp features. Li et al. extended the WLOP operator by incorporating normal information into it. In this modified approach, neighbors across sharp features are regarded as outliers in normal space and are thus excluded from the projection to keep the features intact. Li et al. later suggested incorporating a normal mollification step into the extended WLOP operator to obtain a more accurate result. Liao et al. presented an efficient and feature-preserving Locally Optimal Projection operator (FLOP) for geometry reconstruction, based on a bilateral-weighted LOP that attends to both spatial and geometric feature information. FLOP is greatly accelerated by random sampling from a Kernel Density Estimate (KDE) and has been extended to the efficient and faithful reconstruction of time-varying surfaces. The FLOP method preserves features better than the LOP and WLOP operators. However, it fails to accurately compute normal vectors for sparsely sampled models with sharp features and heavy outliers.
Moving least squares (MLS) is a very attractive tool for designing effective meshless surface representations. Öztireli et al. reviewed robust implicit MLS surfaces in terms of local kernel regression. The RIMLS representation can handle sparse sampling, generates a continuous surface with better-preserved fine details, and can naturally deal with any kind of sharp feature with controllable sharpness. In a similar vein, we validate our method for surface reconstruction using the RIMLS representation.
3. Resampling WLOP
3.1. Resampling WLOP Schema
The WLOP algorithm suffers from high computational costs, due to the large amount of point cloud data and the iterative calculation. Recently, more attention has been focused on how to speed up this algorithm. Liao et al. greatly accelerated the computing time of FLOP. Their subsampled point set, obtained by random sampling from the Kernel Density Estimate (KDE), is a feature-aware approximation to the original input point set. The idea is similar to that in this paper. However, their subsampled set is smaller than the original set: the performance of their method, which is in effect data simplification for point clouds, depends upon the simplification ratio. With our method, the input point set is divided into two subsets but is not reduced, so comparisons between our method and their approach cannot be made directly. An existing solution for speeding up point cloud processing with a hierarchy-based method is demonstrated by Diez et al., who proposed a technique named Hierarchical Normal Space Sampling (HNSS) to speed up coarse matching algorithms for point clouds. Two sets of points are grouped into hierarchies according to the distribution of their normal vectors, and the data hierarchy is traversed using a RANSAC algorithm applied to point matching at every corresponding hierarchy level of the two point sets. Our resampling method involves only one set of points, so comparisons between our method and this approach cannot be made directly either.
In this section, we present Resampling Weighted Locally Optimal Projection, a new and more efficient method of data organization. The computational complexity of the multivariate median problem, which remains the bottleneck of WLOP, is superlinear in the number of input points. The purpose of organizing the data is to reduce the number of input points in each calculation. After being downsampled, the data of every subset are processed by the WLOP algorithm. The Resampling WLOP schema is shown in Figure 1 and described in detail below.
Step 1. We decompose the input set $P$ into two subsets $P_1$ and $P_2$. The number of points in each subsampled point set is the same. Sections 3.2 and 3.3 demonstrate how to choose the points of each subset.
Step 2. We apply the WLOP algorithm to each of the subsets $P_1$ and $P_2$ sequentially, resulting in $Q_1$ and $Q_2$.
Step 3. We create the output by iterating WLOP on each subset. After the prescribed number of iterations, we obtain the projected subsets $Q_1$ and $Q_2$.
Step 4. The ultimate output point set $Q$ is the union of $Q_1$ and $Q_2$. This may be regarded as an upsampling procedure.
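The four steps above can be sketched as the following pipeline. This is a hypothetical interface of our own: `downsample` stands for any implementation of Step 1 (Sections 3.2 and 3.3), and `wlop` for any WLOP implementation used in Steps 2 and 3:

```python
import numpy as np

def resampling_wlop(P, downsample, wlop, n_iter=6):
    """Resampling WLOP schema: split P, project each subset, merge.

    `downsample(P)` must return two index arrays partitioning P (Step 1);
    `wlop(P_sub, n_iter)` is any WLOP routine (Steps 2-3). Both are
    assumed interfaces for illustration, not part of the original code.
    """
    idx1, idx2 = downsample(P)        # Step 1: split into two subsets
    Q1 = wlop(P[idx1], n_iter)        # Steps 2-3: project each subset
    Q2 = wlop(P[idx2], n_iter)
    return np.vstack([Q1, Q2])        # Step 4: merge (upsampling)
```

Because each WLOP call operates on roughly half the points and the cost per iteration is superlinear, the two smaller projections together cost less than one projection of the full set.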
3.2. Simple Downsampling
Let $P_1$ and $P_2$ be two subsets of $P$. The main objective of this step is to determine a downsampling method that yields similar subsets of $P$: the points of each subsampled point set should carry similar surface characteristics, so as to reduce the projection error caused by the resampling algorithm.
We classify the surface points according to their nearest neighbors. We randomly choose a point $p_i$ from $P$ and find its nearest neighbor $p_j$, determined by the KD-Tree algorithm. $p_i$ is grouped into the first subset $P_1$, and $p_j$ is grouped into the second subset $P_2$.
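A possible realization of this simple downsampling, sketched in Python with SciPy's KD-tree (names are ours; the repeated tree rebuild is for clarity, not efficiency):

```python
import numpy as np
from scipy.spatial import cKDTree

def simple_downsample(P, seed=0):
    """Split P into two interleaved subsets by nearest-neighbour pairing.

    Repeatedly pick a remaining point, pair it with its nearest remaining
    neighbour, and send the two points to different subsets. A sketch of
    the simple scheme; with an odd point count the leftover goes to P1.
    """
    rng = np.random.default_rng(seed)
    remaining = list(rng.permutation(len(P)))
    idx1, idx2 = [], []
    while len(remaining) >= 2:
        i = remaining.pop(0)
        tree = cKDTree(P[remaining])      # rebuilt over remaining points
        _, j_local = tree.query(P[i])     # nearest remaining neighbour of p_i
        j = remaining.pop(int(j_local))
        idx1.append(i)
        idx2.append(j)
    idx1.extend(remaining)                # odd leftover, if any
    return np.array(idx1), np.array(idx2)
```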
3.3. Precise Downsampling
To make the result more accurate, we choose the points of each subset from every group taking the surface normal vectors into account. The downsampling method is described as follows.
Step 1. We randomly choose a point $p_i$ from $P$. The $k$-nearest neighbor points of $p_i$ are $p_{i1}, \ldots, p_{ik}$, determined by the KD-Tree algorithm. $p_i$ is grouped into the first subset $P_1$.
Step 2. Let $n_i$ be the normal of point $p_i$ and $n_{ij}$ the normal of point $p_{ij}$:

$$\cos\theta_{ij} = \frac{n_i \cdot n_{ij}}{\|n_i\|\,\|n_{ij}\|},$$

where $\theta_{ij}$ is the normal vectorial angle between $n_i$ and $n_{ij}$, and $\cos\theta_{ij}$ is the normal vectorial angle cosine between $n_i$ and $n_{ij}$.
If $\theta_{ij}$ is the minimum of $\theta_{i1}, \ldots, \theta_{ik}$, that is, $\cos\theta_{ij}$ is the maximum of $\cos\theta_{i1}, \ldots, \cos\theta_{ik}$, we regard $p_{ij}$ as having similar characteristics to $p_i$. Then $p_{ij}$ is grouped into the second subset $P_2$.
Step 3. $p_i$ and $p_{ij}$ are removed from $P$. We repeat from Step 1 until $P$ becomes the empty set.
In the method of Section 3.2, the point set is downsampled by the nearest neighbor algorithm without taking the surface normal vectors into account; we call this the Simple Resampling WLOP method. The goal of the precise downsampling method is to find a more accurate and feature-aware partition of $P$: the neighborhood is one factor to be considered, and the normal vectorial angle is another. In this paper, the point set is divided into two subsets, so we choose $p_i$ and its $k$-nearest neighbors $p_{i1}, \ldots, p_{ik}$, and $p_{ij}$ is chosen by comparing the normal vectorial angles according to Step 2. If the point set were divided into three or more subsets, more points would be chosen from every group. We focus only on the two-subset case, primarily because experimental results have shown that too many subsampled point sets lead to increasing projection error. A comparison of the two resampling methods is given in Section 5.
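The precise downsampling of Steps 1–3 can be sketched as follows (our own illustrative Python, assuming per-point normals are available; names are ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def precise_downsample(P, normals, k=5, seed=0):
    """Normal-aware split of P into two subsets (a sketch of Steps 1-3).

    For each randomly chosen remaining point p_i, the nearest remaining
    neighbour whose normal is most similar (maximum cosine of the normal
    vectorial angle) joins the opposite subset.
    """
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    remaining = list(rng.permutation(len(P)))
    idx1, idx2 = [], []
    while len(remaining) >= 2:
        i = remaining.pop(0)
        tree = cKDTree(P[remaining])            # rebuilt over remaining points
        kk = min(k, len(remaining))
        _, nbrs = tree.query(P[i], k=kk)
        nbrs = np.atleast_1d(nbrs)
        cos = n[[remaining[t] for t in nbrs]] @ n[i]   # normal angle cosines
        j = remaining.pop(int(nbrs[int(np.argmax(cos))]))  # most similar normal
        idx1.append(i)
        idx2.append(j)
    idx1.extend(remaining)                      # odd leftover, if any
    return np.array(idx1), np.array(idx2)
```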
4. Projection Criteria
In this section, we analyze the impact of our method on the accuracy of the projection algorithm. We define two quantitative indicators: error estimation and uniformity of distribution.
4.1. Error Estimation
Pauly et al. designed a method for estimating the numerical and visual error between a simplified point set and its original point set by resampling points on an MLS local surface. The distance function of a smooth surface $S$, which assigns to a point $x$ the shortest distance from $x$ to $S$, has been investigated by Pottmann and Hofer. The following point-to-surface distance approximation is a variant of the previously adopted Taylor approximant of the distance function [22, 23], which approximates the local shape of $S$ at the normal footpoint (the point on $S$ that has the shortest distance to $x$) by a tangent plane rather than a quadric surface.
The LOP and WLOP operators are in effect simplification methods for point cloud data: to make the distribution of points more uniform, the number of projected points is smaller than the number of original points. Therefore, we refer to the error estimation analysis of point cloud simplification algorithms. However, how to compute these distances efficiently and accurately in the context of point cloud simplification remains a challenge.
The quality of the simplification can be measured by a suitable error metric, such as the one-sided Hausdorff distance [24, 25], the two-sided Hausdorff distance [21, 26], the quadratic error metric [27, 28], and the minimum distance. Evaluation metrics in the one-sided Hausdorff distance family, such as the Hausdorff fraction, the Hausdorff quantile, and spatially coherent matching, are also commonly applied to measure the difference between two surfaces in a large number of applications. Different definitions of point-based surfaces and of error metrics will result in different simplification error values.
Given two finite point sets $A$ (representing a model image) and $B$ (representing a test image), the Hausdorff distance is defined as

$$H(A, B) = \max\bigl(h(A, B),\, h(B, A)\bigr),$$

where $h(A, B) = \max_{a \in A} \min_{b \in B} \|a - b\|$.
The function $H(A, B)$ is called the two-sided Hausdorff distance, and the function $h(A, B)$ is called the one-sided Hausdorff distance. The Hausdorff distance measures the degree of similarity between one point set and another. Unlike most alternative methods of shape comparison, the Hausdorff distance is not based on finding point correspondences. Thus, it is more tolerant of perturbations of point locations, since it deals with proximity rather than exact superposition. However, the Hausdorff distance is highly sensitive to outliers.
We denote the original point set by $P = \{p_j\}_{j=1}^m$ and the point set after projection by $Q = \{q_i\}_{i=1}^n$. Let $p_{j(i)}$ be a nearest neighbor of $q_i$ in $P$. Then the minimum distance of $q_i$ to the original point set is

$$d_i = \min_{p \in P} \|q_i - p\| = \|q_i - p_{j(i)}\|.$$

It is shown as a solid line in Figure 2.
We define the one-sided Hausdorff distance as follows:

$$h(Q, P) = \max_{1 \le i \le n} d_i.$$
To reduce the sensitivity to outliers and better evaluate the similarity between the two point sets, we propose another evaluation criterion using the minimum distance error metric. We define the Sum Error Value (SEV) as follows:

$$\mathrm{SEV} = \sum_{i=1}^{n} d_i.$$
The Sum Error Value is the sum of the minimum distances between the projected points and the original point set, which reflects the projection error over all points. For either evaluation criterion, a smaller value indicates a smaller projection error.
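Both criteria can be computed directly from nearest-neighbor distances. A short sketch of our own, using SciPy's KD-tree:

```python
import numpy as np
from scipy.spatial import cKDTree

def projection_errors(P, Q):
    """One-sided Hausdorff distance h(Q, P) and Sum Error Value (SEV).

    d_i is the distance from each projected point q_i to its nearest
    original point in P; h = max_i d_i and SEV = sum_i d_i.
    """
    d, _ = cKDTree(P).query(Q)   # nearest-neighbour distance for each q_i
    return d.max(), d.sum()
```

For example, shifting the four corners of a unit square by 0.1 along one axis gives a one-sided Hausdorff distance of 0.1 and an SEV of 0.4.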
4.2. Uniform Distribution
The uniformity of a point cloud can be represented by its density. There are two major approaches to expressing density: (i) block-based point cloud density extraction, in which all points of the cloud are distributed on the surface, so that the number of points per unit area can be regarded as the density of the three-dimensional point cloud; and (ii) distance-based point cloud density extraction, in which density is expressed by the distances among the cloud points: the smaller the distances between points, the more concentrated their distribution, and therefore the larger the density of the point cloud. From a statistical point of view, variance is a powerful tool for measuring uniformity. In this paper, the uniformity of the data distribution is measured using the variance of the distances between points: the smaller the distance variance, the more uniform the distribution of the point cloud.
Let $Q = \{q_i\}_{i=1}^n$ be the projected point set, and let $q_{ij}$, $j = 1, \ldots, k$, be the $k$-nearest neighbors of $q_i$ in $Q$. Then the Euclidean distance between $q_i$ and a point of its $k$-nearest neighborhood is

$$d_{ij} = \|q_i - q_{ij}\|.$$
The average distance between $q_i$ and its $k$-nearest neighbors is

$$\bar{d}_i = \frac{1}{k} \sum_{j=1}^{k} d_{ij}.$$
The distance variance between $q_i$ and its $k$-nearest neighbors is

$$\sigma_i^2 = \frac{1}{k} \sum_{j=1}^{k} \bigl(d_{ij} - \bar{d}_i\bigr)^2.$$
The average distance between all points in $Q$ and their $k$-nearest neighbors is

$$\bar{D} = \frac{1}{n} \sum_{i=1}^{n} \bar{d}_i.$$
So we define the Local Uniform Distribution (LUD) as follows:

$$\mathrm{LUD}_i = \sigma_i^2.$$
We define the Global Uniform Distribution (GUD) as follows:

$$\mathrm{GUD} = \frac{1}{n} \sum_{i=1}^{n} \bigl(\bar{d}_i - \bar{D}\bigr)^2.$$
LUD reflects the local uniformity of the point distribution within each neighborhood; GUD reflects the global uniformity of the distribution over the surface.
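A sketch of the two indicators as described above (our own Python; the exact normalization used in the paper may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def uniformity(Q, k=4):
    """Local and Global Uniform Distribution indicators (variance-based).

    LUD_i: variance of the distances from q_i to its k nearest neighbours.
    GUD:   variance of the per-point average neighbour distances over Q.
    Smaller values indicate a more uniform distribution.
    """
    d, _ = cKDTree(Q).query(Q, k=k + 1)  # k+1: first hit is the point itself
    d = d[:, 1:]                         # drop the zero self-distance
    d_bar = d.mean(axis=1)               # average neighbour distance per point
    lud = d.var(axis=1)                  # local variance sigma_i^2
    gud = d_bar.var()                    # global variance of the averages
    return lud, gud
```

For an interior point of a perfectly regular grid the two nearest-neighbor distances are equal, so its LUD is zero; endpoints of the grid raise the GUD above zero.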
5. Experiments and Result
We compare the numerical results of our method on several established models with various shape complexities and noise levels: the faceYo model, the foot model, the egea model, the rocker-arm model, and the Stanford Armadillo model. We reconstruct the models using the RIMLS method implemented in MeshLab, developed by the Visual Computing Group. This section presents empirical evidence from tests on a desktop computer with a 2.93 GHz Intel Core i3-320 CPU and 2 GB of RAM. We implemented our prototype program in Visual C++ 2010.
The WLOP position update consists of two terms. The first term attracts the particle to the given point set; the second is a repulsion term, which pushes the particles away from other particles to make the point set more uniform. $\mu$ and $h$ are two user-selected parameters, where $\mu$ controls the strength of the repulsion force that distributes points uniformly and $h$ is the support radius determining the size of the influence neighborhood. However, these two parameters are specified manually by trial and error in the existing literature [14–16]. Only Liao et al. employed a bilateral weight average algorithm to determine the support radius adaptively with respect to local features.
Using the above two quantitative indicators, we analyze the main parameters of the WLOP algorithm quantitatively. As $\mu$ increases, the accuracy of the projection gradually decreases and the point distribution on the surface becomes more uniform. In our experiments, we fix the repulsion parameter $\mu$. An exceedingly large support radius $h$ may cause oversmoothed results; conversely, if $h$ is too small, the desired results cannot be achieved either. However, we found that the change in the two quantitative indicators is small for different values of $h$ within a suitable range. We set $h$ as in the original WLOP work, based on the diagonal length of the bounding box of the input model and the number of its points. Through the quantitative evaluation, we can not only compare the projection result after each iteration with that of the previous iteration, but also choose an appropriate number of iterations. Details of the data analysis are omitted here.
5.1. Computational Time
Whichever downsampling method is chosen, the computational time of Resampling WLOP is the same, and the time consumed by the resampling itself is far less than that of WLOP. The number of points in each test data set and the resulting computational times of WLOP and Resampling WLOP are listed in Table 1. We define the acceleration ratio as $T_R/T_W$, where $T_R$ is the computational time of Resampling WLOP and $T_W$ is the computational time of WLOP. The results in Table 1 show that adopting Resampling WLOP reduces the computational time to between 24% and 52% of the original time of WLOP. Figure 3 illustrates the comparison on the faceYo point cloud model with 13,746 points, and Figure 4 on the Armadillo model with 172,974 points. The computational time is significantly reduced by our method, and as the number of points increases, the speedup achieved by our method improves. Hence Resampling WLOP is a very efficient method when a comparatively large number of points need to be projected.
5.2. Projection Error
The results of comparison between WLOP and Resampling WLOP in terms of the two quantitative indicators are shown in Tables 2 and 3. The projection error estimation is shown in Table 2. Table 3 illustrates the results of point uniform distribution.
Our experimental validation in Section 5.1 demonstrates that our method greatly improves calculation efficiency. However, the comparison results in Tables 2 and 3 show that, according to the two quantitative indicators, the accuracy of the projection is reduced by the resampling method, and additional noise may be generated. The indicators include the Sum Error Value, the Hausdorff distance, the Local Uniform Distribution, and the Global Uniform Distribution. They show that, with the precise resampling method, the projection error is smaller and the point distribution more uniform than with the Simple Resampling method.
Taking the faceYo model as an example, the Hausdorff distance using Precise Resampling WLOP increases by 6.8% (from 1.8049 to 1.9284) compared with WLOP, and the Sum Error Value increases by 11.6% (from 4464.9 to 4986.1), as shown in Table 2. Table 3 shows that the Local Uniform Distribution using Precise Resampling WLOP increases by 21.6% (from 0.2326 to 0.2829) compared with WLOP, while the Global Uniform Distribution increases by only 0.53% (from 2.6604 to 2.6744). However, Table 1 and Figure 3 show that the computational time using Resampling WLOP decreases to 51.8% of that of WLOP (from 56.84 ms to 29.42 ms).
The reconstruction of the faceYo model by one iteration of WLOP is presented in Figure 9(a), and that by Precise Resampling WLOP in Figure 9(b). The reconstruction of the Armadillo model by one iteration of WLOP is presented in Figure 10(a), and that by Precise Resampling WLOP in Figure 10(b). Visually, the projection result of Resampling WLOP is nearly unaltered compared with the WLOP method, and the contrastive analysis shows that the reconstruction quality remains satisfactory, with well-preserved features and fairly distributed points.
5.3. Effect of Iterations
In order to get an improved reconstruction, we conduct six iterations for the faceYo model and the Armadillo model using WLOP, Simple Resampling WLOP, and Precise Resampling WLOP. With an increasing iteration count, the consolidation of point clouds by the WLOP algorithm improves. Our goal is to compare the two quantitative indicators between Resampling WLOP and WLOP, and the tests show that the two indicators behave similarly in either algorithm as the iteration count increases. We therefore use six iterations.
The comparison among the three algorithms in terms of projection error and uniform distribution value is shown in Figures 5–8. With an increasing iteration count, the Sum Error Value becomes smaller regardless of the projection method used. However, the Hausdorff distance increases slightly because of the noise generated by algorithm error. The variation trends of the point distribution under Resampling WLOP are similar to those under WLOP. It is also shown that, with the precise resampling method, the projection error is smaller and the point distribution more uniform than with the Simple Resampling method.
The reconstructions of the faceYo model created by six iterations of WLOP and of Precise Resampling WLOP are shown in Figures 9(c) and 9(d); those of the Armadillo model are shown in Figures 10(c) and 10(d). Because of the noise generated by algorithm error over the increased iterations, the reconstruction of the faceYo model by Precise Resampling WLOP is not satisfactory. However, the reconstruction of the Armadillo model remains satisfactory, with well-preserved features and fairly distributed points using Resampling WLOP. The effect of the resampling algorithm on projection error and uniform distribution gradually decreases as the number of points of the model increases.
The WLOP operator achieves better point distribution and noise resistance than Resampling WLOP. However, the computational cost of Resampling WLOP is significantly lower, as shown in Section 5.1. When the iterative method is used, the effect of our acceleration algorithm becomes even more remarkable.
6. Conclusion and Future Work
In this paper we introduce an improved algorithm for the consolidation of point clouds, based on the classical WLOP operator combined with a resampling scheme. We also present empirical evidence of its effectiveness and its advantages in comparison with the classical WLOP algorithm.
Our implementation is evaluated on models with various shape complexities and noise levels. The experimental results show that the computation time consumed by Resampling WLOP is about one-half of that of WLOP. A combination of quantitative and qualitative analyses is performed to further explore the relationship between the point projection and the accuracy of the surface reconstruction. The resampling method is shown to enjoy a distinct advantage when the number of points is very large.
However, one major drawback of the method is the observed increase in error and noise that results from the reduction in computation cost. In this regard, several avenues for further research can be proposed: (i) further improvement of the projection quality, (ii) adaptation of the Resampling WLOP model to various kinds of point-based models with different surface properties, and (iii) improvement of the robustness of the algorithm.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported in part by the Key Project of the National Natural Science Foundation of China (no. 61134009), the National Natural Science Foundation of China (nos. 61473077, 61473078), Cooperative Research Funds of the National Natural Science Funds Overseas and Hong Kong and Macao Scholars (no. 61428302), Program for Changjiang Scholars and Innovation Research Team in University from the Ministry of Education (no. IRT1220), Specialized Research Fund for Shanghai Leading Talents, Project of the Shanghai Committee of Science and Technology (nos. 13JC1407500, 11JC1400200), Innovation Program of Shanghai Municipal Education Commission (no. 14ZZ067), and the Fundamental Research Funds for the Central Universities (no. 2232012A3-04).
References
J. C. Carr, R. K. Beatson, J. B. Cherrie et al., "Reconstruction and representation of 3D objects with radial basis functions," in Proceedings of the Computer Graphics Annual Conference (SIGGRAPH '01), pp. 67–76, August 2001.
P. Alliez, D. Cohen-Steiner, Y. Tong, and M. Desbrun, "Voronoi-based variational reconstruction of unoriented point sets," in Proceedings of the 5th Eurographics Symposium on Geometry Processing (SGP '07), pp. 39–48, 2007.
M. Kazhdan, M. Bolitho, and H. Hoppe, "Poisson surface reconstruction," in Proceedings of the 4th Eurographics Symposium on Geometry Processing (SGP '06), pp. 61–70, 2006.
Y. Miao, P. Diaz-Gutierrez, R. Pajarola, M. Gopi, and J. Feng, "Shape isophotic error metric controllable re-sampling for point-sampled surfaces," in Proceedings of the IEEE International Conference on Shape Modeling and Applications (SMI '09), pp. 28–35, IEEE, Beijing, China, June 2009.
B. Li, W. Jiang, Z. Cheng, G. Dang, and S. Jin, "Feature preserving consolidation for unorganized point clouds," in Proceedings of the 1st IEEE International Conference on Progress in Informatics and Computing (PIC '10), pp. 892–895, IEEE, Shanghai, China, December 2010.
H. Pottmann and M. Hofer, "Geometry of the squared distance function to curves and surfaces," in Visualization and Mathematics III, H.-C. Hege and K. Polthier, Eds., pp. 221–242, Springer, New York, NY, USA, 2003.
R. Klein, G. Liebich, and W. Strasser, "Mesh reduction with error control," in Proceedings of the 7th Conference on Visualization (VIS '96), pp. 311–318, San Francisco, Calif, USA, October 1996.
M. Garland and P. S. Heckbert, "Surface simplification using quadric error metrics," in Proceedings of the Conference on Computer Graphics (SIGGRAPH '97), pp. 209–216, August 1997.
M. Garland and Y. Zhou, "Quadric-based simplification in any dimension," ACM Transactions on Graphics, vol. 24, no. 2, pp. 209–239, 2005.
The Visual Computing Group, MeshLab, 2012, http://meshlab.sourceforge.net/.