Abstract

Terrestrial laser scanning (TLS) is one of the most efficient and accurate tools for 3D measurement and can reveal surface-based characteristics of objects with the aid of computer vision and programming. Thus, it plays an increasingly important role in deformation monitoring and analysis. Automatic data extraction and efficient, accurate modeling from scattered point clouds are challenging issues in TLS data processing. This paper presents a data extraction method that considers the partial and statistical distribution of the scanned point clouds, called the window-neighborhood method. Based on the extracted point clouds, 3D modeling of the boundary of an arched structure was carried out. The ideal modeling strategy should be fast, accurate, and of low complexity regarding its application to large amounts of data. The paper discusses the fitting accuracy in four cases combining whole-curve and segment-wise fitting with polynomial and B-spline models. Similar numbers of parameters were set for the polynomial and the B-spline because the number of unknown parameters is essential for the fitting accuracy. The uncertainties of the scanned raw point clouds and of the modeling are discussed. This process is considered a prerequisite step for 3D deformation analysis with TLS.

1. Introduction

Structural health monitoring is of significant importance for historical and architectural structures for safety reasons [1–3]. The arched structure is one of the traditional forms in Western architecture, for example in churches. As such buildings age, safety risks arise in the arched structures, which bear the load of the construction. TLS is a highly accurate, fast, and efficient technology for acquiring 3D point clouds of structures [4]. An experiment was designed to simulate the loading and deformation process of an arched structure, and TLS was used to measure surface-based 3D point clouds.

Morphology-oriented analysis with TLS is a new direction for deformation analysis [5]. One remaining issue is to ensure the accuracy of the modeling for deformation analysis with TLS technology. The influencing factors and the significance of the accuracy of free-form 3D modeling of point clouds have not been reported in detail in the current literature.

This paper presents an efficient extraction method and discusses accurate modeling of the boundary of an arched structure, which are prerequisite steps for deformation analysis. Data extraction is carried out with the window-neighborhood method, which is closely related to the partial and statistical distribution of the scanned point clouds. In this paper, polynomial and B-spline functions are adopted to model the data. A polynomial is deficient in describing local features, but its advantages are efficiency and simplicity, while a B-spline is exactly the opposite. Four cases, including a localized segmentation, are discussed to find an efficient and accurate modeling solution for the measured arched structure. Since the number of parameters influences the fitting results, the numbers of parameters of the polynomial and B-spline models are kept close to each other to emphasize the effect of the model function.

1.1. Terrestrial Laser Scanning and Data Extraction

Terrestrial laser scanning (TLS), which can capture up to millions of points per second with a linear accuracy in the millimeter range, is one of the most efficient tools to measure 3D objects and structures. Traditional monitoring methods, for example, total station, inclinometer, accelerometer, and leveling, are generally point-based measurements. Some low-cost sensors require preembedding or contact [6], which is difficult to maintain over long periods. TLS is an area-oriented measurement technique that obtains the 3D coordinates of dense point clouds. Furthermore, the TLS measurement is contactless and nondestructive. It is widely applied in the monitoring of natural and artificial structures, for example, rock slopes, forests, coasts, and buildings [7–10].

One important task in the preliminary steps of modeling is to detect or recognize features in the 3D data. Reference [11] extracted lithological boundaries from photographic images of rock surfaces by means of an interactive segmentation method. Reference [12] used a semiautomatic method to extract urban road networks from point clouds to improve the detection of objects. However, automatic data extraction from point cloud data is still challenging.

1.2. Curve Fitting with Mathematical Functions

It is common to fit the collected data with geometric forms or mathematical functions [13–16]. Because the point clouds from TLS measurements contain gaps, leaps, cusps, and so on, it is necessary to construct a fitting model to investigate the 3D deformation of a structure. Complicated spatial structures especially require fitting by free-form curves and surfaces [17].

Curve fitting is the process of constructing a curve or mathematical function that best approximates the data points [17]. Curve fitting usually means finding the curve that minimizes the displacement of the points from the curve in the vertical direction, the orthogonal direction, or both [18].

Polynomials and B-splines are commonly used forms to fit a curve. Reference [19] adopted polynomial fitting with the least squares method to detect the wheel set size in a railway transportation system. Polynomial surfaces have also been considered to approximate concrete structures with gentle surfaces [20–23]. The B-spline is a generalization of the Bézier curve and can be further extended to the nonuniform rational B-spline, which has the advantage of constructing an exact model; it is therefore suitable for high-accuracy TLS measurements. Due to the properties of B-spline curves, for example, flexibility and accuracy, they are popular for estimating complex objects. Ordinary B-spline approximation involves data parameterization, knot adjustment, and control point determination. The first step parameterizes the measured points by means of the equal division, chord length, or centripetal method [24]. The second step is the knot vector adjustment; a robust method for the optimal selection of the knot vector is studied by Bureick et al. [25]. The final step is the estimation of the control points by means of the Gauss-Markov model [26].
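As an illustration of the first step, the following minimal Python sketch computes chord-length parameters for a sequence of measured 3D points; the function name chord_length_parameters and the use of NumPy are our own illustrative choices, not part of the cited works.

```python
import numpy as np

def chord_length_parameters(points):
    """Assign a parameter in [0, 1] to each measured point in proportion
    to the accumulated chord length along the point sequence."""
    points = np.asarray(points, dtype=float)                 # shape (s + 1, 3)
    chords = np.linalg.norm(np.diff(points, axis=0), axis=1)  # segment lengths
    u = np.concatenate(([0.0], np.cumsum(chords)))
    return u / u[-1]                                          # normalized parameters
```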

1.3. Work Flow

This paper focuses on the statistical analysis of spatial multidimensional point cloud data, which is the basis for the deformation analysis of an arched structure. The work flow of this paper is presented in Figure 1. The raw data are acquired by TLS and, subsequently, the point clouds are preprocessed to outline the arched structure and to eliminate the surroundings, which are not the subject of the deformation analysis.

The arch-shaped curve is then extracted to investigate the deformation behavior, for which the window selection method is adopted. The latter includes four steps: window definition, distribution analysis, threshold determination, and boundary extraction.

Using the resulting extracted data, point clouds fitting is performed for four kinds of combinations, which are explained later in Table 1. The fitting precision is then analyzed and the four cases are compared. Last but not least, the deformation model of the fitting parts will be generated and optimized.

2. Experimental Setup

An experiment was carried out to investigate the deformation behavior of an arched structure using TLS point clouds. The main components are the arched structure specimen, the Z + F Imager 5006, the load equipment, and two supports, which are shown in Figure 2. The load was applied continuously for 20 min, during which deformation formed, and was then paused for 10 min for the measurements. The load remained within a safety range in which deformation of the arched structure is generated but crushing and failure have not yet begun. The lower image in Figure 2 shows the experimental setup. A coordinate system was defined to investigate the deformation behavior of the arch based on the highly accurate TLS, which is shown in the upper image in Figure 2, where the Y-axis direction is perpendicular to the plane XOZ.

In this paper, the arch-shaped part of the object is vital for structural monitoring, because it bears the main load and shows significant deformation. The side surface of the object is taken into account for a more precise deformation analysis. However, the arch-shaped object is occluded by some other objects, such as steel I-beams, and needs to be separated for a more accurate analysis.

3. Data Analysis

3.1. Data Extraction

The main concern of the deformation analysis lies in the edge of the arched structure, motivated by the facts that the arched edge defines the border of the arched surfaces with significant deformation and that it connects well to photogrammetry, since it can also be detected in digital images. From this point of view, the edge of the arched structure should be extracted and fitted with high accuracy. The point clouds are preprocessed with 3D point cloud software to extract the side surface of the arched structure; the result is shown in Figure 3.

As the laser beam rotates and scans with a uniform angular speed, the footprints of the laser beams have a theoretically close-to-even distribution on a general surface. Suppose that each point of the surface carries a square selection window of predefined size, which serves as a reference for its neighboring points; then there are essentially two cases. The first occurs in the middle part of the surface, where the main points have almost the same number of neighbors, and such main points make up the majority of the point cloud. The second occurs at the boundary, with approximately half the number of neighbors compared to the first case. A schematic diagram with a simple example is shown in Figure 4, where all the points represent footprint centers, which describe only the position, not the real size, of the footprints. The red point is the main point, the black points are neighboring points, and the green and blue hollow circles indicate possible relative positions of the main point. The grey lines show the selection window and the red line is a simplified boundary. When the boundary is inclined, as indicated by the dashed green and blue lines, the corresponding positional relationship can also be depicted with the green and blue solid lines and hollow circles.

The neighbor counts of the identified boundary points can only lie within a certain range rather than take an exact value, due to the inclination of the boundary line and the scattered distribution of points in the boundary area, as shown in Figure 5. The marked quantity Δn denotes the number of points that have escaped from the boundary, which influences the number of neighbors shown as black points. The yellow points are virtual points which, in combination with the black points, would constitute a normal selection window as in the center of the surface.

According to the schematic diagrams in Figures 4 and 5, the number of neighbors of a boundary point should satisfy the condition in (1),

n ≤ N/2 + Δn,  with Δn ≤ m,  (1)

where n is the actual number of neighbors, which varies from point to point; N is the number of neighbors with the maximum probability in the defined-size square window, so that N/2 is the average neighbor number of boundary points; Δn is the statistical error, that is, the number of points rejected as boundary neighbors by the relative inclination of the boundary line; and m is the number of points on each side of the square window.
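The neighbor counting underlying (1) can be sketched in Python as follows, assuming the side-surface points have already been projected onto their plane; the square window is realized with the Chebyshev (maximum-coordinate) norm of SciPy's k-d tree, and the function and parameter names (neighbor_counts, half_width) are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def neighbor_counts(points_2d, half_width):
    """Count, for every point, the neighbors inside a square selection
    window of side 2 * half_width centered on that point."""
    tree = cKDTree(points_2d)
    # p = inf selects the Chebyshev (maximum-coordinate) norm,
    # so the search region is a square rather than a circle
    windows = tree.query_ball_point(points_2d, r=half_width, p=np.inf)
    return np.array([len(idx) - 1 for idx in windows])   # exclude the point itself
```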

The size of the selection window has an influence on the boundary extraction. If the size is too small, noisy points will be mistaken for boundary points; conversely, if the size is too large, some boundary points will be lost. The thickness of the arched structure is 10 cm.

The threshold N was defined as the value with which the boundary is filtered so that it represents reality. The value of N is determined by the actual spacing of neighboring points in the point clouds. On the one hand, it is influenced by the density of the point clouds, which depends on the scanning distance and angle and on the inner parameters of the TLS instrument used in the experiment; on the other hand, the local point cloud shape formed by the actual geometric shape of the boundary and by occlusions of the point clouds also affects the value of the threshold N. Because these various factors have undetermined influences on the point clouds of the boundary, the threshold is analyzed explicitly by means of the distribution of the number of neighboring points using a MATLAB program. The actual number of neighboring points is distributed as shown in Figure 6, where the maximum probability corresponds to 136 neighboring points. The threshold theoretically lies at the maximum of the distribution of the neighbor numbers.

Because the value of N is vital to the result of the extraction, it should be carefully determined before picking up the boundaries; therefore, the distribution of the neighbor counts is investigated and the value with the maximum probability is selected. If N is too small, some valid boundaries will not be detected where the point density is large; on the contrary, if N is too large, the boundary points will contain more noise, which badly influences the accuracy of the 3D model. Since the laser spots are almost uniformly distributed and most of the points lie in the interior area rather than at the boundaries, we chose the N with the maximum probability in Figure 6, which is 136. After filtering all the point clouds with (1), the boundary of the side arched surface is extracted, as shown in Figure 7. The red point clouds correspond to the value 136 and the green point clouds to 126. The blue arrows point out that when N is too small, some valid boundaries are neglected. N is not chosen larger, for example, 146, because that would cause too much noise.
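A minimal sketch of this threshold selection and filtering might look as follows, assuming the neighbor counts have already been computed; the tolerance argument delta_n and the function name extract_boundary are hypothetical, and the criterion follows the condition reconstructed in (1).

```python
import numpy as np

def extract_boundary(points, counts, delta_n):
    """Select N as the neighbor count with the maximum probability and keep
    the points whose counts satisfy the boundary criterion in (1)."""
    counts = np.asarray(counts)
    values, freq = np.unique(counts, return_counts=True)
    N = values[np.argmax(freq)]            # 136 in the experiment described above
    boundary_mask = counts <= N / 2 + delta_n
    return np.asarray(points)[boundary_mask], N
```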

The extracted edges contain not only the arch-shaped curves but also shadows of the occlusions. The boundary point clouds are imported into CloudCompare® to separate the arch-shaped curves. CloudCompare is an independent open-source project and free software that provides a set of basic tools for processing 3D point clouds [27]. The step adopted here is polygonal segmentation. The lower edge of the point clouds is extracted, which constitutes the point cloud used for 3D curve fitting.

3.2. Data Fitting

The extracted point clouds are approximated by both a polynomial curve and a B-spline. To fit a polynomial curve, the 3D coordinates x, y, and z are each described as a function of a common variable u. The estimation parameters for each coordinate are calculated by (2),

c(u) = a_1 + a_2 u + a_3 u² + ⋯ + a_(n+1) u^n,  (2)

where c represents the coordinate x, y, or z; u is the common variable; n is the order of the polynomial; and a_i are the parameters to be estimated, with index i from 1 to n + 1. The number of unknown parameters is 3(n + 1) for an nth-order polynomial fitting of the 3D curve.

To fit a 3D B-spline curve, the functional model in (3) is described by the degree p and the pth-degree B-spline basis functions N_(i,p)(u),

C(u) = Σ_(i=0..n) N_(i,p)(u) P_i,  (3)

which are defined on the nonperiodic knot vector U in (4), containing m + 1 knots,

U = {0, …, 0, u_(p+1), …, u_(m−p−1), 1, …, 1},  (4)

where the first and the last knot are each repeated p + 1 times. The control points P_i are the unknowns. The relationship between the degree p, the number of control points n + 1, and the number of knots m + 1 is m = n + p + 1. The control points are estimated in a least squares sense by minimizing the sum of squared distances Σ_k ‖Q_k − C(ū_k)‖² under the restriction that the B-spline passes through the first and last points measured. Q_k are the measurements and ū_k are the parameters for each Q_k, respectively, calculated by the chord length method. Furthermore, the knot vector is calculated following [23].

The B-spline parameters chosen in this case are the degree p and the number of control points. Because one coordinate direction of the control points is estimated in each step, there are 22 unknown parameters in each estimation of the B-spline curve. This number matches well with that of the polynomial, which has 21 parameters for the 6th-order approximation. This increases the reasonability of the comparison between the two models, since the number of parameters has a significant influence on the accuracy of the fitting.
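The two fitting approaches can be sketched as follows in Python, assuming chord-length parameters u in [0, 1] (strictly increasing) and the points stored as an (s + 1) × 3 array; NumPy's polynomial fit and SciPy's make_lsq_spline are used here only as stand-ins for the estimations described above, the cubic degree and the quantile-based interior knots are illustrative assumptions, and the endpoint constraint is not enforced in this sketch.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_polynomial_curve(points, u, order=6):
    """Fit x(u), y(u), z(u) independently as polynomials of the common
    parameter u; 3 * (order + 1) = 21 unknowns for the 6th-order case."""
    return [np.polynomial.polynomial.polyfit(u, points[:, j], order)
            for j in range(3)]

def fit_bspline_curve(points, u, degree=3, n_ctrl=22):
    """Least-squares B-spline fit of the 3D curve; the control points are
    the unknowns (22 per coordinate direction in the case discussed)."""
    # clamped knot vector with interior knots at quantiles of u --
    # a simple stand-in for the knot computation cited in the text
    n_interior = n_ctrl - degree - 1
    q = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    interior = np.quantile(u, q)
    knots = np.concatenate(([u[0]] * (degree + 1), interior,
                            [u[-1]] * (degree + 1)))
    return make_lsq_spline(u, points, knots, k=degree)   # vector-valued BSpline
```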

4. Results

The fitted polynomial and B-spline curves are presented in Figure 8, where the blue curve corresponds to the polynomial approximation and the red curve to the B-spline fitting. The comparisons between the polynomial and B-spline curves for epochs 3 and 9 are shown in Figures 8(a) and 8(b), respectively.

Since the displacement along the Z-axis is essential, the focus is on the X-Z plane. The coordinate range is restricted to [−2.75, −0.85] m in the X-axis direction and [−4.35, −3.85] m in the Z-axis direction to reduce the redundancy. According to Figure 8, the B-spline and polynomial fittings differ significantly in areas A, C, and B, corresponding to the concave piece in Figure 8(a) and the convex piece in Figure 8(b), which reveals that the B-spline has an advantage in the abruptly changing sections. Although high-order polynomial fitting can alleviate this problem, it may lead to Runge's phenomenon, an oscillation problem at the edges of an interval that occurs when using polynomial interpolation with polynomials of high degree over a set of equispaced interpolation points. This phenomenon was discovered by Carl David Tolmé Runge while exploring the behavior of errors when using polynomial interpolation to approximate certain functions [15].
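A classic self-contained illustration of Runge's phenomenon (our example, not taken from the experiment) is the equispaced high-degree interpolation of f(x) = 1/(1 + 25x²) on [−1, 1]:

```python
import numpy as np

# Interpolating the Runge function with a high-degree polynomial over
# equispaced nodes oscillates strongly near the interval edges.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
nodes = np.linspace(-1.0, 1.0, 15)                          # 15 equispaced nodes
p = np.polynomial.Polynomial.fit(nodes, f(nodes), deg=14)   # degree-14 interpolant
x = np.linspace(-1.0, 1.0, 1001)
print("maximum deviation from f:", np.max(np.abs(p(x) - f(x))))  # grows with the degree
```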

The statistical errors of the fittings are considered to get a deeper insight into the fitting effects on the spatially scattered points. The polynomial approximation of the whole extracted line is defined as the original case/case 1 for the later comparison; the other cases are listed in Table 1, where P and B stand for polynomial and B-spline approximation, respectively.

The uncertainties of the polynomial curve fitted for epochs 3 (E3) and 9 (E9) are presented in Table 2, including the deviation, the standard deviation, and the covariance, where C(x, y, z) denotes the covariance of the x, y, and z coordinate values, the diagonal elements are the deviations, and SD stands for standard deviation.
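One plausible way to obtain such quantities is sketched below: per-axis residuals between the measured and fitted points, their standard deviations, and the covariance matrix C(x, y, z); the helper residual_statistics is hypothetical and only indicates how values of this kind could be computed.

```python
import numpy as np

def residual_statistics(points, fitted):
    """Per-axis residuals between measured and fitted curve points,
    their standard deviations, and the 3x3 covariance matrix."""
    res = np.asarray(points) - np.asarray(fitted)   # residuals in x, y, z
    sd = res.std(axis=0, ddof=1)                    # per-axis standard deviation
    cov = np.cov(res, rowvar=False)                 # covariance C(x, y, z)
    return res, sd, cov
```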

The number of unknown parameters is considered to compare the B-spline and polynomial for spatial point clouds more impartially. The uncertainties of the B-spline for epochs 3 and 9 are presented in Table 2.

According to Table 2, the standard deviations of the polynomial fit in the Z-axis direction for epochs 3 and 9 are 17.51 and 12.98 mm, respectively, which do not match the high accuracy of the TLS measurement. Therefore, B-spline curve fitting is adopted and analyzed. For the B-spline, the standard deviations in the Z-axis direction for epochs 3 and 9 are 1.45 and 1.40 mm, respectively; the accuracy is significantly improved, and the comparison is listed in Table 4.

Occlusion of the laser beam in the experiment is unavoidable because of the shadowing by the load equipment, which causes fragmentation of the point clouds and thus increases the uncertainties during the parametric estimation of the approximation. From this perspective, a segment-wise fitting is employed, which is presented in Figure 9, where epoch 9 is depicted.

The segment-wise approximations are shown in Figure 9, where the blue curve corresponds to the polynomial approximation, the green curve is an automatically plotted line, and the red curve is the B-spline fitting. The green points are the raw point clouds and the blue stars are the control points of the B-spline. According to the red ellipses in Figure 9, the B-spline fit is greatly improved compared with the polynomial fitting on the left side, which is achieved through the control points and knots. However, the deviations of the two models match well on the right side. This suggests that the quality of the B-spline is determined by the control points and knots; thus, the optimization of the B-spline model is much more complex and costly than that of the polynomial. When the B-spline is used to fit a surface, this phenomenon is even more prominent.
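A minimal sketch of such a segment-wise fit is given below, assuming the parameter values u and a set of breakpoints (for example, at occlusion gaps) are available; the breakpoint placement and the polynomial order are illustrative assumptions, and fit_segments is a hypothetical helper.

```python
import numpy as np

def fit_segments(points, u, breakpoints, order=6):
    """Fit each parameter segment separately with its own polynomial,
    mitigating the influence of occlusion-induced gaps."""
    edges = np.concatenate(([u[0]], np.asarray(breakpoints, dtype=float), [u[-1]]))
    fits = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (u >= lo) & (u <= hi)       # points belonging to this segment
        coeffs = [np.polynomial.polynomial.polyfit(u[mask], points[mask, j], order)
                  for j in range(3)]
        fits.append((lo, hi, coeffs))
    return fits
```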

The uncertainties of the segment-wise polynomial fits for each epoch are listed in Table 3, where the 6th-order polynomial function is found advantageous through parametric model trials.

The standard deviations in the Z-axis direction for epochs 3 and 9 are 2.0 and 1.8 mm, respectively, according to Table 3, which is an obvious improvement compared with the approximation of the whole boundary curve.

Similarly, the segmented curve is fitted by the B-spline function. The uncertainties of the approximation are shown in Table 3, where the standard deviations in the Z-axis direction for epochs 3 and 9 are 0.68 and 0.71 mm, respectively. The fitting is further improved by the segment-wise approximation.

Based on the uncertainty analysis in Tables 2 and 3, the standard deviations can be summarized as in Table 4, which describes the degree of improvement of the B-spline fitting compared with the polynomial approximation.

According to the analysis, the improvement achieved by segmentation is shown in Table 4, where both the polynomial and the B-spline reach a better approximation precision after segmentation. However, the improvement due to segmentation is more significant for the polynomial curve, ranging from 80.46% to 89.60%, compared with 41.10% to 68.51% for the B-spline.
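Assuming the percentages in Table 4 express the relative reduction of the standard deviation, they would follow from

improvement (%) = (SD_before − SD_after) / SD_before × 100,

where SD_before and SD_after denote the standard deviations before and after the change of model or the segmentation, respectively.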

It is also revealed that the curve fitting results are closely related to the accuracy and reliability of the scanned point clouds. The TLS employed is the Z + F Imager 5006, whose scanning precision parameters are listed in Table 5. In this experiment, the laser scanner stood about 5 m away from the arch, and the arch, which lies partly in shadow, can be considered grey, which corresponds to a range noise of 0.7 mm. This agrees with the result of the segmentation with the B-spline method.

5. Conclusion

This paper focuses on the comparative analysis of B-spline and polynomial approximations based on the extraction of multidimensional data collected by TLS for deformation analysis. An innovative window selection method is adopted to efficiently extract the edge data of the arched structure, in which the partial and statistical distribution of the scanned point clouds is considered. An optimal extraction is chosen with respect to the noise and gaps in the extracted point clouds.

The B-spline and polynomial approximations are presented and analyzed in four cases, from which the following conclusions can be drawn:

(a) For the experiment in this paper, the uncertainty ranking of the four cases is: segmentation with the B-spline method (less than 1 mm), whole curve with the B-spline method and segmentation with the polynomial method (1–2 mm), and whole curve with the polynomial method (10–20 mm).

(b) It is verified experimentally, by comparing the standard deviations of the two fitting methods, that the B-spline has a more satisfactory fitting precision; the standard deviation of the B-spline fitting of the whole edge curve is about 90% smaller than that of the polynomial approximation.

(c) However, in the case of segmented fitting, the accuracy improvement of the B-spline is not as large as that of the polynomial approximation. This reveals that the B-spline has the advantage of better accuracy in complex situations, but it also implies that the polynomial approximation can greatly improve its fitting accuracy through segmentation, which simplifies the complex case.

It is indicated that, in the approximation of curves for deformation monitoring with TLS measurements, a segmentation method can be adopted, in which the efficiency of the polynomial and the accuracy of the B-spline can be combined to construct a fast and accurate 3D model.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

The publication of this article was funded by the Open Access Fund of the Leibniz Universität Hannover. The authors also would like to acknowledge the support of the Natural Science Foundation of Jiangsu, China (no. BK20160558), and the Six Talent Peaks Project in Jiangsu Province (no. 2015-JZ-009). The authors wish to acknowledge the support of all the colleagues at the Geodetic Institute of Leibniz University Hanover for their valuable information and help.