Abstract

Building information modeling (BIM) in industrialized bridge construction is usually performed on the basis of initial design information. Differences exist between the model of a structure and its actual geometric dimensions and features because of manufacturing, transportation, hoisting, assembly, and load bearing. These variations affect construction project handover and facility management. The solutions available at present entail the use of point clouds to reconstruct BIM. However, these solutions still encounter problems, such as the inability to obtain the actual geometric features of a bridge quickly and accurately; moreover, the created BIM is nonparametric and cannot be dynamically adjusted. To address these problems, this paper proposes a fully automatic method of reconstructing parameterized BIM from point clouds. An algorithm for bridge point cloud segmentation is developed; the algorithm can separate the bridge point cloud from the entire scanning scene and segment the point clouds of the unit structures. Another algorithm for extracting the geometric features of the bridge point cloud is also proposed; this algorithm remains effective for partially missing point clouds. The feasibility of the proposed method is evaluated and verified using theoretical and actual bridge point clouds, respectively. The reconstruction quality of the BIM is also evaluated visually and quantitatively, and the results show that the reconstructed BIM is accurate and reliable.

1. Introduction

Industrialized construction at present needs to consider not only the industrialization of the building construction process but also the entire building life cycle. The life cycle of bridges includes the design, construction, completion, operation and maintenance, and demolition phases. The operation and maintenance stage is crucial, as it accounts for half of the total life cycle cost [1]. The National Institute of Standards and Technology reported that 68% ($10.6 billion) of the additional costs from inadequate interoperability in the construction industry are incurred by building owners and operators, and 85% ($9.0 billion) of these costs are generated in the operation and maintenance phase [2]. The promotion and application of building information modeling (BIM) technology benefits both traditional and industrialized (high-technology) construction. Compared with its application in traditional construction, BIM in industrialized construction can realize modular design and industrial assembly. The reasonable and effective application of BIM technology in the industrialized construction of bridges can greatly reduce operation and maintenance costs and improve the status quo [3]. This research is dedicated to providing technical support for the industrialized construction of bridges, thus helping to accelerate the development of industrialized construction.

BIM facilitates information exchange for operation and maintenance processes in the architecture, engineering, and construction industries. In bridge construction projects, BIM is usually created from 2D drawings in the design stage (as-designed BIM). However, construction errors, construction changes, and several uncontrollable factors lead to inconsistencies between the bridge and its design [4]. Therefore, as-designed BIM cannot accurately reflect the actual geometric dimensions and features of bridges. This inadequacy reduces the accuracy of BIM-based quantity takeoff and 3D finite element analysis. Hence, such BIM can neither represent the actual geometric deformation information nor support completion, acceptance, operation, and maintenance. Even if the BIM reconstructed from 2D drawings is initially consistent with the geometric features of the bridge, these features change with service time during the operation and maintenance phase [5], and the gap between the geometry of the model and that of the structure gradually enlarges. Therefore, the reconstruction of a parameterized, dynamically updateable BIM that is highly consistent with the actual geometric shape of the bridge is essential for the completion, acceptance, operation, and maintenance of bridges in industrialized construction.

Research efforts in recent years have focused on reconstructing BIM with the aid of point clouds from terrestrial laser scanning (TLS) to cater to different needs [6]. TLS is a precise reality-capture measuring instrument; the scanned point cloud data can reach millimeter-level accuracy and are as realistic as an image [7]. The models reconstructed in early research are not semantically rich BIMs but mesh models that cannot carry information. Although current research uses point cloud plane recognition and extraction algorithms to reconstruct BIM, most of the reconstructed BIMs are houses with a single structure type, mainly planar structures such as walls and floors [8]. Studies on the reconstruction of bridge BIM from point clouds are relatively few, and the BIMs reconstructed in these studies are not parameterized; their geometric dimension accuracy cannot meet certain requirements, and they cannot be dynamically updated. Compared with a house, a bridge has a complex structure, and the cross-sectional shapes and spatial positions of its components vary. Under the same conditions, complex structure types are likely to result in incomplete scans and missing point clouds. Therefore, new algorithms need to be developed to solve various problems, such as quickly and automatically segmenting the unstructured point cloud of a bridge, accurately extracting a bridge's geometric features when the point cloud is partially missing, and reconstructing a parameterized and dynamically updateable BIM based on geometric feature data [9].

To address this research gap, this study proposes a method of automatically reconstructing bridge parametric BIM from point cloud data. One contribution of this work is a novel point cloud segmentation algorithm. The algorithm has a simple principle, few parameters to adjust, and high segmentation accuracy, and it largely avoids over- and undersegmentation. The algorithm separates the entire bridge point cloud from the scanned scene to remove the ground point cloud and then divides the bridge structure into units to obtain the point cloud of each component. A new point cloud geometric feature extraction algorithm with a good boundary fitting effect is also proposed; it can perform high-precision boundary fitting even when points are locally missing on the point cloud boundary. To achieve the transition from point cloud to BIM and obtain a native BIM instead of a mesh model, a "link" must be formed between the point cloud and the BIM. This study uses the visual programming software Dynamo to handle the extracted point cloud geometric features [10, 11] and drive the modeling software Autodesk Revit to reconstruct and assemble the BIM components [12]. The rest of this paper is organized as follows. Section 2 introduces the research background of the three aspects of point cloud BIM reconstruction: point cloud segmentation, point cloud geometric feature extraction, and parametric modeling. Section 3 describes the implementation method and technical process of the entire point cloud BIM reconstruction and tests the proposed algorithms on theoretical point clouds, covering segmentation, geometric feature extraction, and parametric modeling. Section 4 compares the developed method with an existing segmentation algorithm and uses an actual bridge point cloud to reconstruct its BIM, thus verifying the effectiveness of the method. Visual and quantitative analyses of the accuracy of the BIM reconstructed from the point cloud are carried out, further demonstrating the reliability of BIM reconstruction with this method. Section 5 summarizes the research and clarifies the directions for future work.

2. Research Background

In general, 3D reconstruction involves transforming point clouds into a 3D mesh model. This kind of model reconstruction is simple, but unlike BIM, the mesh model cannot carry information on the life cycle of buildings. BIM reconstruction must undergo two key steps: point cloud segmentation and point cloud geometric feature extraction.

2.1. Point Cloud Segmentation

A point cloud obtained by any method is a collection of discrete points, and usually only one or a few subsets are used in research. Therefore, the point cloud must be segmented. Traditional point cloud segmentation is generally performed manually, but the efficiency is extremely low. Many researchers have attempted to automate the segmentation of point clouds in recent years. For example, Schnabel et al. [13] proposed a RANSAC-based algorithm to extract the regular parts of point cloud data and proved that the algorithm performs well even in the presence of many outliers and high noise. Günther et al. [14] used an RGB-D camera to capture a series of 3D point clouds, reconstructed a semantic map of the indoor environment in an incremental and closed-loop manner, and performed model-based object recognition. Bogdan and Cousins [15] utilized the RANSAC method to segment cluttered desktop scenes in a house; the method performs well in terms of versatility and stability, but when the number of scene points is large, the algorithm consumes a large amount of computation time. Rabbani et al. [16] proposed a cylinder detection algorithm that decomposes cylinder detection into two steps: orientation estimation, followed by position and radius estimation. Ruodan et al. [17] introduced a top-down detection method for reinforced concrete bridge slabs, piers, pier caps, and beam components. This method uses a sliding algorithm to separate the beam and slab components from the pier components, but it does not consider the segmentation of the scene point cloud around the bridge. Richtsfeld and Vincze [18] presented a segmentation algorithm that can be applied directly to the point cloud itself; the algorithm is based on the principle of radial reflection and is robust for small object point clouds without scene information. George et al. [19] used features based on normal vectors and point neighborhood flatness to segment large-scale scene point clouds. This segmentation algorithm is suitable for the classification of scene objects but has certain limitations in the segmentation of bridge structural units.

These extant studies have achieved good segmentation results in their experiments. However, the practicability of the segmentation algorithms is limited, and very few segmentation algorithms are suitable for the special structure types of bridges. Moreover, the purpose of point cloud segmentation in this study is to reconstruct a parametric BIM that accords with the actual geometric characteristics of bridges; because this purpose differs from those of the existing studies, their segmentation algorithms are not universally applicable here. Several researchers have attempted to convert 3D point cloud data into depth images and use image segmentation algorithms for processing. However, a certain amount of point cloud data may be lost during the conversion to 2D images, so this approach is only suitable for small scene point clouds with a small amount of data and low accuracy requirements. A segmentation algorithm that can process large scenes and massive ground and bridge point clouds is needed. Such an algorithm is suitable for the point cloud segmentation of all similar bridge structure types, namely, simply supported beam bridges, continuous rigid frame bridges, and other beam bridges characterized by a simple structure and single structural components. It can also be used for the local structure segmentation of complex bridge structures.

2.2. Point Cloud Geometric Feature Extraction

Geometric feature extraction is required to obtain the geometric feature data of a target point cloud; for example, the boundary contour of the point cloud is extracted, the axis of the point cloud is determined, or a plane is fitted to the point cloud. These steps are also the premise of reconstructing a parameterized BIM from a point cloud. Gumhold et al. [20] constructed a covariance matrix based on the neighborhood of each measurement point. Through covariance analysis, the points were divided into crease, boundary, inflection, and plane points, and a minimum spanning tree was established for the various feature points to construct feature lines. Kim et al. [21] described a method of extracting road marking features from point cloud data and LiDAR sensor intensity information; the method is effective for uncalibrated LiDAR sensors. Ramamurthy et al. [22] extracted the geometric and topological features of line segments from the 2D cross-sectional data of a 3D point cloud to conveniently "extract" design features, such as size, from the point cloud. However, noise points and the roughness of the target surface complicate the extraction process. Verma et al. [23] developed an algorithm to detect building roofs and terrain surfaces by using a 3D connected component analysis to identify continuous smooth regions. Zhang et al. [24] proposed a scattered point cloud feature extraction method based on density space clustering that uses a new feature detection operator; however, the method is only effective for models with large differences in potential surface shapes. Deng et al. [25] presented a point cloud feature extraction method based on morphological gradients that can effectively extract large-scale, hole-like point cloud data features.

Previous research has studied the geometric feature extraction of point clouds of many basic structures but did not consider the accurate extraction of geometric information when the point cloud is partially missing. The geometric feature extraction algorithm proposed in this study can effectively solve this problem.

2.3. Point Cloud BIM Reconstruction

Currently, many methods can be used to reconstruct BIM from point cloud data. Gao et al. [26] created BIMs of ancient buildings by collecting point cloud data and extracting the features of these structures. Pepe et al. [27] completed the transformation from a point cloud to a 3D model by using 3D modeling software and a developed toolkit; however, the reconstructed model is not a real BIM. Jung et al. [28] completed the creation of the interior and exterior walls of buildings through point cloud segmentation and feature recognition, but the created model has a single structure type. Danielle et al. [29] proposed the use of the data processing algorithms provided by the Point Cloud Library and several functions from an extensible building information modeling toolkit; however, this method only considers the BIM reconstruction of regular geometric planes (e.g., walls and floors) and thus has certain limitations. Thomson and Boehm [30] used 3D laser scanning technology to collect subway point clouds, developed a 2D vector map from the point cloud slice of each station floor, and created a BIM by combining the vector diagram with the field situation. Danielle et al. [31] proposed a system that converts indoor point clouds to BIM. The system can produce excellent results with a small amount of point cloud information, but it is mainly suitable for point clouds of flat surfaces, such as walls and floors.

The BIM reconstructed by these research methods has certain limitations, such as a low degree of automation, unparameterized models, and the inability to update dynamically based on data. This study develops a new technology to automatically reconstruct a bridge BIM from point clouds and overcomes these limitations.

3. Methods

3.1. Overview

The proposed method has three parts. First, the unstructured point cloud is segmented. Second, slicing and geometric feature extraction of the segmented point cloud are performed. Lastly, the model is reconstructed based on the extracted geometric feature data. The first part mainly involves point cloud preprocessing and segmentation. In point cloud segmentation, the ground point cloud is removed first before segmenting the bridge point cloud. Element segmentation refers to dividing the superstructure and substructure of a bridge into separate component point clouds. The second part includes component slicing, boundary fitting of the point cloud slice, and calculation of the boundary intersection coordinates. The third part comprises the generation of the intersection coordinate data table, retrieval of the table through visual programming to rebuild components, and assembly of the components. The specific process is illustrated in Figure 1.

3.2. Point Cloud Segmentation

Before the reconstruction of the actual bridge point cloud, two different types of theoretical point clouds are tested to evaluate the entire technical process. These theoretical point clouds are regular, noise-free point clouds that do not require preprocessing [32]. Figure 2(a) shows the first group of flat-surface point clouds, which represent the ground and the bridge substructure and superstructure from bottom to top. For this type of point cloud, the stacking direction of the structure is parallel to the Z-axis of the coordinate system. The abrupt change in the number of points per unit interval along this stacking direction, referred to as the interval density of the point cloud, is used as the basis for segmentation.

The average density of the point cloud and the interval value must be determined to avoid the over- and undersegmentation caused by excessively large or small interval values. The concept of neighborhood query is used to calculate the average density. The point cloud is divided into N intervals along the Z-axis stacking direction of the point cloud model, and the length of each interval is set to n, as shown in Figure 2(b). Theoretically, a small n value is preferable. However, considering that the point cloud is a discrete point set, n should be greater than the average point density ρ to avoid intervals with null values [33, 34]. Subsequently, with the help of the mature Kd-tree neighborhood query method, b points are randomly selected from the point cloud as query objects. Each query object Ki is taken as the center of a circle, and the circle with radius R is used as the range threshold. The value of R is an adjustable parameter; different types of point clouds must be tested to determine the most appropriate value because of their different scanning densities and modes. As shown in Figure 3, the number of points PKi in the query range and ρ should satisfy equation (1).
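The neighborhood query step can be illustrated with a short sketch. Because the exact relation between PKi and ρ is given by equation (1), the conversion of the mean neighbor count into an average point spacing used below is only an illustrative assumption, as are the function name and the default parameter values.

import numpy as np
from scipy.spatial import cKDTree

def estimate_density(points, R=0.05, b=500, seed=0):
    # points: (N, 3) array; R: query radius; b: number of random query objects K_i.
    rng = np.random.default_rng(seed)
    tree = cKDTree(points)
    idx = rng.choice(len(points), size=min(b, len(points)), replace=False)
    # P_Ki: number of neighbors of query object K_i within radius R (Figure 3).
    counts = np.array([len(tree.query_ball_point(points[i], R)) for i in idx])
    mean_count = counts.mean()
    # Assumed conversion of the mean neighbor count into an average point
    # spacing rho, later used as a lower bound for the interval length n.
    rho = np.sqrt(np.pi * R**2 / max(mean_count, 1.0))
    return mean_count, rho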

Through this calculation procedure, an n value that satisfies the conditions of the algorithm is selected, the number of points in each interval of length n is counted, and the interval density map along the Z-axis stacking direction is established, as shown in Figure 4. The resulting graph has three extreme points (1, 2, and 3), each representing the position with the largest number of points in its Ni region. Under the same scanning environment and the same interval length n, the entire ground and beam bottom point clouds fall into separate intervals. Within a single interval, the point cloud of the column is only a ring of contour points, so the number of points is smaller than that of the ground and beam bottom point clouds, and the number of points in each interval from the bottom to the top of the column is almost the same. The positions of extreme points 1 and 2 correspond to the numbers of ground and beam bottom points, respectively, and extreme point 3 represents the number of points on the top of the beam.
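A minimal sketch of the interval-density map and of picking its extreme points is given below; the use of scipy.signal.find_peaks and the function names are assumptions rather than the paper's implementation.

import numpy as np
from scipy.signal import find_peaks

def interval_density(points, n):
    # Count the points that fall in each interval of length n along the Z-axis.
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + n, n)
    counts, _ = np.histogram(z, bins=edges)
    return counts, edges

def extreme_intervals(counts, prominence=None):
    # Extreme points of the interval density map (e.g., points 1, 2, and 3 in
    # Figure 4) appear as peaks in the per-interval counts.
    peaks, _ = find_peaks(counts, prominence=prominence)
    return peaks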

The corresponding interval of extreme point 1 can be directly used to remove the ground, whereas that of extreme point 2 can be utilized to separate the column and the beam. The point cloud after segmentation is shown in Figure 5.

To approximate the point clouds of actual bridges under construction, the second group of theoretical point clouds is given a nonplanar ground point cloud. The structure consists of a double cylinder (two columns), one tie beam, and two supports, as shown in Figure 6. No flat ground point cloud exists in the scanned point cloud of the bridge. The processing of the nonplanar ground point cloud differs from the abovementioned method because the point clouds in the intervals at the junction of the columns and the ground no longer exhibit extreme points. As a result, the ground point cloud cannot be separated using the density characteristics, and other methods must be applied to remove it.

The interval density map in the Z-axis stacking direction is calculated, and local element segmentation is conducted. Figure 6(b) shows five extreme points. Compared with the first group of experiments, the extreme point corresponding to the ground point cloud is missing, and extreme points corresponding to the tie beam and support are added. Extreme points 1 and 2, which correspond to the bottom and top surfaces of the tie beam, respectively, appear at the 45th interval. Extreme point 3 represents the junction of the column top and the bearing. The point cloud segments represented by the other extreme points are unchanged. As shown in Figure 7, the bent cap and support can thus be separated. For the segmentation of the remaining nonplanar ground and double cylinder point clouds, the point cloud projection filtering segmentation algorithm must be applied to remove the former.

The first step in the automatic point cloud projection filtering segmentation algorithm (Algorithm 1) is the projection of the double cylinder with its nonplanar ground point cloud onto the xoy coordinate plane. This step causes the density of the point clouds at the column boundaries to increase sharply, as shown in Figure 8. The m neighborhood points of each point P are determined, the average distance from each point to its neighborhood points is calculated, and the distance distribution is statistically analyzed. The corresponding mean value μ and standard deviation σ are obtained under the assumption that the distribution is Gaussian. If the average distance from a point to its neighborhood points exceeds μ + Tσ, where T is the threshold multiple of the standard deviation that must be adjusted according to the actual situation, then the point is marked as an outlier. The formulas are shown in equations (2)∼(5) [35, 36], where D_{n×(m-1)} denotes the distance matrix, d_{ij} is the Euclidean distance between a point and its neighborhood points, and d̄_i is the average distance between each point P and its neighborhood points [37]. Multiple experimental tests show that the segmentation performs best when m is 50, 100, or 200 and T is in the range of 0.1 to 1; the appropriate m and T values can be selected from these ranges according to the actual situation of the point cloud.

    Input: initial point cloud Data_{n×3}; neighborhood value m; threshold T;
    Output: processed point cloud PC_{(n-i)×3}
(1) Data_x = Data_{n×3}(:, 1);
(2) Data_y = Data_{n×3}(:, 2);
(3) PC = [Data_x, Data_y]; //Project the point cloud onto the xoy coordinate plane.
(4) NeighborSearch(PC); //Find the m neighborhood points of each point in PC.
(5) D_{n×(m-1)}; //Construct the distance matrix.
(6) d̄_{n×1} = D_{n×(m-1)} × [1/m]_{(m-1)×1}; //Calculate the average distance between each point and its neighborhood points.
(7) μ, σ; //Calculate the mean and standard deviation of the distance distribution.
(8) for i = 0 : n do
(9)   if d̄_i > μ + Tσ //Identify the points whose average distance exceeds the threshold.
(10)  then
(11)  d̄_{(n-i)×1} = d̄_{n×1} \ d̄_i; //Delete the distances of the marked outliers.
(12)  PC_{(n-i)×3}; //Retain the coordinates after the segmentation.
(13)  end if
(14) end for
(15) return PC_{(n-i)×3}
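For illustration, a compact Python rendering of Algorithm 1 might look as follows; the Kd-tree query from scipy and the vectorized thresholding are implementation choices not specified in the paper.

import numpy as np
from scipy.spatial import cKDTree

def projection_filter(points, m=50, T=0.5):
    # points: (N, 3) cloud; returns the retained (N - i, 3) subset.
    xy = points[:, :2]                      # project onto the xoy plane
    tree = cKDTree(xy)
    # k = m + 1 because the nearest neighbor of every point is the point itself.
    dists, _ = tree.query(xy, k=m + 1)
    mean_d = dists[:, 1:].mean(axis=1)      # average distance to the m neighbors
    mu, sigma = mean_d.mean(), mean_d.std() # mean and standard deviation of the distribution
    keep = mean_d <= mu + T * sigma         # points beyond mu + T*sigma are outliers
    return points[keep]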

The values of m and T are set to 50 and 0.5, respectively. The point clouds outside the outer boundary of the columns are removed in the first iteration. Therefore, the original points corresponding to the nonflat ground before projection are also deleted, and only the projected points of the columns and tie beam remain. To separate the column and tie beam point clouds, the threshold is adjusted to 0.3, and another iteration is performed. The middle tie beam point cloud is then removed, leaving only the column point clouds. After the two cylindrical point clouds are saved, the coordinate range of their projection is used as a judgment condition, and the point cloud saved after the first iteration is filtered with this condition [38]. The tie beam is completely separated after the points within this coordinate range are removed; this process is called backsubstitution, as shown in Figure 9.
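The backsubstitution step can be sketched as below, assuming the "coordinate range" criterion is approximated by the axis-aligned bounding box of each retained column projection; the function name is hypothetical.

import numpy as np

def backsubstitute(saved_cloud, column_clouds):
    # saved_cloud: cloud retained after the first iteration (columns + tie beam);
    # column_clouds: list of the two cylindrical column clouds kept afterward.
    # Points of saved_cloud whose xy projection falls inside the coordinate
    # range of a column are removed; the remainder is the tie beam.
    keep = np.ones(len(saved_cloud), dtype=bool)
    for col in column_clouds:
        xmin, ymin = col[:, :2].min(axis=0)
        xmax, ymax = col[:, :2].max(axis=0)
        inside = ((saved_cloud[:, 0] >= xmin) & (saved_cloud[:, 0] <= xmax) &
                  (saved_cloud[:, 1] >= ymin) & (saved_cloud[:, 1] <= ymax))
        keep &= ~inside
    return saved_cloud[keep]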

The segmentation results of the two groups with different types of theoretical point clouds imply that the segmentation algorithm that combines the concept of interval density and projection filtering has a good effect on theoretical point cloud segmentation.

3.3. Point Cloud Geometric Feature Extraction and Modeling

After the segmentation of the second group of point clouds, the overall shapes of the theoretical point clouds resemble two types of simple geometric structures, namely, cylinders and cuboids. To facilitate the calculation, the two structures are sliced to transform the 3D problem into a 2D problem to a certain extent [39, 40], as shown in Figure 10.

Before the point clouds are sliced, an appropriate slice thickness Δ, which is related to the average density ρ of the point cloud, should be determined. Because point cloud projection filtering segmentation affects the point cloud density, ρ should be recalculated before slicing. In general, Δ should be greater than ρ. Zi and Zj represent the z-coordinate values of the points on the upper and lower surfaces of a slice, respectively. The least squares method can be directly used to fit each slice of the cylinder point cloud and obtain the feature data of the center coordinates and radius of each slice [41, 42]. The z-coordinate value of the center of each slice is taken as the mean of the average values of Zi and Zj. Figure 11 shows the axis fitting of the center coordinates of the theoretical cylinder point cloud. The fitted points do not fluctuate because the point cloud is theoretical [43].
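One possible least squares formulation for the slice fit is the algebraic (Kåsa-type) circle fit sketched below; the paper does not state which least squares variant it uses, so this is only an assumed illustration, including the interpretation of the slice center z as the midpoint of the mean Zi and Zj values.

import numpy as np

def fit_circle(slice_xy):
    # slice_xy: (N, 2) xy coordinates of one cylinder slice; returns (cx, cy, r).
    # Solve the linear system for the circle x^2 + y^2 + a*x + b*y + c = 0.
    x, y = slice_xy[:, 0], slice_xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

def slice_center_z(z_upper, z_lower):
    # Assumed interpretation: the slice center's z is the midpoint of the mean
    # z values of the upper and lower slice surfaces (Zi and Zj).
    return 0.5 * (np.mean(z_upper) + np.mean(z_lower))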

The bent cap point cloud is adopted as an example of the cuboid structure. The point cloud is sliced into rectangular sections, as shown in Figure 12. Given that a part of the point cloud of the actual bridge beam body is a polygon, the coordinate data of the four corners of each rectangular slice should be obtained to establish the cap beam model. To rapidly extract the corner coordinates of shapes with four or more edges, this study proposes a boundary fitting algorithm for extracting corner data. The algorithm consists of the following steps (a simplified sketch follows the list):
(1) The slice is rotated counterclockwise around the Z-axis by a random angle α, and a seed point P1 is randomly selected.
(2) The point P2 nearest to P1 is determined and used as the new seed point. After all points have been traversed in this way, the points are sorted according to Euclidean distance.
(3) An appropriate grid size L is selected, and the number of points N in each grid is calculated. Assessment conditions are then set, and grids containing fewer than M points are skipped. The random sample consensus (RANSAC) algorithm is used to fit the points of each qualifying grid, and the corresponding slope Ki is calculated.
(4) All Ki values are obtained and sorted. The line equation of each side is derived from the slope and a known point. The intersection coordinates of each pair of adjacent lines are then calculated, and the slice is rotated back clockwise around the Z-axis by α. Finally, the corner coordinates of each slice are obtained.
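To make steps (3) and (4) concrete, the simplified sketch below fits a line to each pre-grouped edge and intersects adjacent lines to obtain the corner coordinates; the nearest-neighbor ordering of step (2), the gridding, the RANSAC fit, and the rotation by α are omitted, so this is an illustration under the assumption that the slice has already been rotated away from near-vertical edges and that its boundary points have been grouped by edge.

import numpy as np

def fit_edge(points_xy):
    # Fit y = k*x + b to the points of one edge (a plain least squares stand-in
    # for the RANSAC fit of step (3)); returns (k, b).
    k, b = np.polyfit(points_xy[:, 0], points_xy[:, 1], 1)
    return k, b

def intersect(line1, line2):
    # Intersection of y = k1*x + b1 and y = k2*x + b2 (adjacent edges are not parallel).
    (k1, b1), (k2, b2) = line1, line2
    x = (b2 - b1) / (k1 - k2)
    return np.array([x, k1 * x + b1])

def slice_corners(edge_point_sets):
    # edge_point_sets: list of (Ni, 2) arrays, one per edge, in boundary order.
    lines = [fit_edge(e) for e in edge_point_sets]
    # Corner i is the intersection of edge i with the next edge (cyclically).
    return np.array([intersect(lines[i], lines[(i + 1) % len(lines)])
                     for i in range(len(lines))])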

After completing the feature data extraction of the theoretical point cloud, the visual programming software Dynamo is used to retrieve the feature data. The BIM modeling software Revit is utilized to complete the parametric creation of the points, lines, and volumes, as shown in Figure 13. Given that the reconstructed model is created in Revit, it supports parameter adjustment and material property assignment, thereby addressing the limitations of current reconstruction methods and realizing the theoretical transformation from a point cloud to BIM [44, 45].
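As an illustration of the "link" between the extracted feature data and Revit, the following sketch shows a Dynamo Python Script node that reads an exported table of slice corner coordinates and converts it to Dynamo points for downstream lofting nodes; the CSV layout and the node wiring are assumptions, not the graph used in this study.

# Hedged sketch for a Dynamo Python Script node (CSV layout is assumed: x, y, z per row).
import clr
import csv

clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import Point

csv_path = IN[0]  # file path supplied through the node's first input port

points = []
with open(csv_path) as f:
    for row in csv.reader(f):
        x, y, z = (float(v) for v in row[:3])
        points.append(Point.ByCoordinates(x, y, z))

OUT = points  # list of Dynamo points consumed by downstream curve/loft nodes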

4. Experiments

A bridge under construction in a high-altitude area of China, built with an industrialized construction method, is selected as the object of this experiment. Considering the actual situation of the construction site, the six-span section crossing a gully is taken as the scanning target. Each span has four standard precast box girders with a length of 20 m, so the section consists of 24 precast box girders, 10 single columns, 5 tie beams, and 7 capping beams. The top surface of the precast box girders is uneven and covered with debris because the wet joints and deck pavement of the bridge are not yet completed; therefore, this surface is not scanned.

4.1. Experimental Overview

A FARO Focus3D X 330 laser scanner is used in the experiment. Scanning is conducted at night to avoid capturing the various moving objects that would appear in the scanned data during the daytime. Eight scanning stations are arranged in advance according to the terrain and environment of the site, as shown in Figure 14. The scanning parameters and scanning time of each station are consistent.

The registration accuracy of the point clouds from the stations reaches 2 mm when target spheres are used. Preliminary noise reduction and point cloud format conversion are then performed. During the export of the point clouds, preliminary downsampling must be conducted to avoid exporting an excessive amount of data, which would slow down the subsequent algorithms. The point cloud after preprocessing is depicted in Figure 15.

To remove the ground point cloud, the vertical interval density of the entire bridge is calculated to separate the upper and lower structures of the bridge by position, as shown in Figure 16. As a result, the point cloud of the upper structure no longer participates in the subsequent calculation, and the time efficiency is improved.

During the removal of the ground point cloud, m and T are set to 100 and 0.2, respectively, and four loop iterations are performed. The edge contour point clouds of the seven cap beams and the edge contour point clouds of the tie beams and columns are obtained, and the ground point cloud is removed, as shown in Figure 17.

In the division of the double columns and tie beams, two loop iterations are performed with m set to 100 and T set to 0.3. The remaining ground points are deleted after the first iteration, and the tie beam point cloud is deleted with the same parameters in the second iteration; thus, only the double column point cloud remains, as shown in Figure 18. The projection coordinate range is then substituted back into the initial point cloud to segment the tie beam, as shown in Figure 19.

The segmentation effect of the point cloud projection filtering algorithm is then verified by comparison. Specifically, the region growing algorithm based on normal vectors and curvature (RGNC) is used to segment the first and second groups of theoretical point clouds and a local point cloud of the actual bridge. The RGNC algorithm has multiple parameters; after numerous parameter tuning experiments, two sets of segmentation results are obtained, as shown in Figure 20. Figures 20(a) and 20(b) show that the RGNC algorithm has a good segmentation effect on the theoretical point clouds, but a certain loss of the structure edge point cloud is observed. Moreover, the segmentation is face-based; for example, the cap beam point cloud is divided into six faces instead of being kept as one cap beam. Figure 20(c) shows that the RGNC algorithm does not perform well on the actual bridge point cloud, and serious over- and undersegmentation occur.

4.2. Experiment Results

Several hidden structures of the bridge, such as the supports under the precast box girders, the beams at the ends of the precast box girders, and the internal cavity structure of the precast box girders, are difficult to capture because of the limitations of the actual scanning environment. The hidden structures mainly pertain to the bearings on the pier caps and the beams at the ends of the prefabricated box girders, which are relatively small in size. Therefore, modeling these hidden structures according to the 20 m standard precast box girder design has only a slight effect on the calculation of engineering quantities and the force analysis of the entire bridge. The creation sequence is as follows: precast box girder, cap beam, column, and tie beam. Figure 21 shows the interactive reconstruction process of the BIM of the construction bridge in Dynamo and Revit.

To check the coordinate matching between the reconstructed BIM of the bridge and its point cloud and the actual reconstruction effect, the point cloud of the construction bridge before segmentation is imported into Revit after format conversion. The blue point cloud in Figure 22 is the point cloud before segmentation. The top of the precast box girder model is visible in Figure 22(a) because the point cloud at the top of the precast box girder was not scanned and therefore does not cover it. As shown in Figure 22(c), the point cloud on the side of a single precast box girder does not completely coincide with the model. This phenomenon occurs because the boundary data of the extracted point cloud slices are obtained by fitting the best straight lines; thus, the slight concavity and convexity on the side of the precast box girder cannot be fully captured. The enlarged view in Figure 22(d) shows that the capping beam and its point cloud overlap closely. Overall and local inspections indicate that the reconstructed BIM coincides almost completely with its point cloud.

An intuitive analysis of the degree of fit between the point cloud and the model is insufficient for determining the actual accuracy of the point cloud BIM reconstruction. To verify and quantify this accuracy, the bridge substructure and a precast box girder are randomly extracted together with their corresponding point clouds for a 3D comparative analysis. The 3D comparison of the bridge substructure and the precast box girder is illustrated in Figure 23.
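A simple way to compute such deviation statistics is a nearest-neighbor comparison between the scanned component cloud and points sampled from the reconstructed model surface, as sketched below; this unsigned-distance check is only an assumed stand-in for the 3D comparison tool used in the study, and signed positive and negative deviations would additionally require surface normals.

import numpy as np
from scipy.spatial import cKDTree

def deviation_stats(scan, model_samples):
    # scan, model_samples: (N, 3) and (M, 3) arrays in the same coordinate system.
    # For each scanned point, take the distance to the nearest model sample.
    d, _ = cKDTree(model_samples).query(scan, k=1)
    return {"mean": d.mean(), "std": d.std(), "max": d.max()}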

The results of the 3D comparative analysis indicate that the standard deviations of the precast box girder and bridge substructure are less than 0.02 m, and the average positive and negative deviations are less than 0.015 m, as shown in Table 1. The overall deviation of the bridge substructure is slightly larger than that of the precast box girder. The large local deviations in several parts can be ascribed to three reasons: (1) the relatively dense number of lofted contours during model creation, (2) the format conversion among the models in the software, and (3) the error of the boundary fitting algorithm.

5. Conclusions and Prospect

The goal of this study is to quickly reconstruct a BIM that conforms to the actual geometric features of a bridge. The key is to automatically segment the bridge point cloud from the surrounding scene and to accurately extract the geometric features of the bridge point cloud. To achieve this goal, a point cloud projection filtering segmentation algorithm was proposed to realize the automatic segmentation of bridge point clouds, and a geometric feature extraction algorithm was also developed. The visual programming software Dynamo was used as the link between the point cloud and BIM. The effectiveness of the algorithms was initially tested through the segmentation of theoretical point clouds and BIM reconstruction. Afterward, the algorithms and the entire technical process were verified using actual bridge point clouds, and bridge point cloud scene segmentation and unit segmentation were realized. By slicing the segmented point clouds and fitting and extracting geometric feature parameters, the geometric parameters of each structure of the bridge were obtained. These parameters were then used to reconstruct the parametric BIM of the bridge.

To verify the accuracy and quality of the reconstructed BIM, the model and its point cloud were visualized and quantitatively evaluated. The results showed that the degree of overlap between the BIM and point cloud was relatively high. The standard deviations of the bridge pier and prefabricated box girder obtained through a quantitative comparison were less than 0.02 m, the average positive and negative deviations were less than 0.015 m, and the overall deviation was within the controllable allowable range. The reconstructed BIM exhibited high accuracy, reliability, and quality. In summary, the rapid reconstruction of parameterized BIM through point clouds not only increases the geometric consistency between BIM and the actual structure but also provides technical support for the industrialized construction of bridges.

However, this study has drawbacks that need to be addressed in future work. First, the versatility of the algorithms for complex steel truss bridges, such as segmenting large numbers of staggered I-beam point clouds while completing model creation, should be improved. Second, a new software system that integrates point cloud processing and BIM creation should be developed. Solving these problems would further accelerate the development of industrialized bridge construction.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors thank all survey participants of the paper.