Abstract

A new generation method is proposed in this paper to acquire range images of a complicated polyhedron in 3D space from a series of view angles. In the proposed generation method, the concept of three-view drawing in mechanical cartography is introduced into the range image generation procedure. Negative and positive directions of the x-, y-, and z-axes are selected as the view angles to generate the range images of the complicated polyhedron in 3D space. Furthermore, a novel iterative operation of mathematical morphology is proposed to ensure that satisfactory range images can be generated for the polyhedron from all the selected view angles. Compared with the existing method based on single view angle and interpolation operation, the structure features contained in the surface of the complicated polyhedron can be represented more consistently with the reality by using the proposed multi-view-angle range image generation method. The proposed generation method is validated by using an experiment.

1. Introduction

With the development of instrument science and technology, a new kind of measurement technology, laser scanning, has emerged to acquire 3D surface characteristics of measured objects with high spatial and temporal resolutions [1]. During the scanning procedure, dense high-precision laser points can be obtained from the measured object [2]. The obtained laser points contain a large amount of surface texture information of the measured object [3]. Since no physical contact occurs between the laser scanning instrument and the measured object, high-precision measurement of objects that cannot be contacted, for example, objects with flexible surfaces, radioactive objects, or objects at very high temperature, has become possible [4–6]. Furthermore, unlike traditional photogrammetry-based methods, common interference factors, for example, insufficient illumination, colors of measured objects, and optical distortion of lenses, have almost no impact on measurement accuracy in the implementation process of laser scanning [7, 8]. With such advantages, nowadays, laser scanning plays a more and more important role in the field of high-precision and noncontact measurement [9, 10].

Since laser scanning data are becoming an important source for rapid and high-precision contactless measurement in many practical research fields, for example, reverse engineering of complex casting or forging parts in the mechanical metallurgy industry, digital protection and information retention for historical relics, and surveying and mapping of complex terrain and landform, the extraction of high-precision structure information of the measured object from the raw laser scanning data, that is, the point cloud, has become a great challenge [11–13].

In order to detect the complete surface structure information, a holonomic point cloud is required to be obtained from the measured object in the scanning procedure. In the holonomic point cloud, laser points should be distributed on the entire surface of the measured object, that is, top surface, bottom surface, and flank surfaces of the object. In real applications, for a measured object with complicated surface structure, it is difficult to acquire a holonomic point cloud for the object from a certain view angle using laser scanning instrument. Generally, multisite scanning mode is utilized to acquire holonomic point cloud of the object. Point clouds obtained for the object from different view angles are combined to acquire the holonomic point cloud for the object. Usually, distribution of laser points in the holonomic point cloud is very anomalous [14]. In addition, volume of laser scanning raw data is normally quite large in practical situations. Consequently, it is very difficult to directly extract the high-precision surface structure information from the raw data of laser scanning. Customarily, in preliminary stage of raw scanning data processing, range image is acquired for the object from the raw data. With the aid of digital image processing technology, surface structure information of the measured object is estimated from the obtained range image in advance [15, 16]. Subsequently, on the basis of the estimated information, point cloud of the object can be finely classified and analyzed [17]. It can be deduced that the quality of obtained range image has a great influence on the final processing result of the point cloud.

Till now, several range image generation methods have been proposed [18–20]. These range image generation methods mainly aim to generate a range image for the measured object from a single view angle with the aid of gridding and interpolation operations. For measured objects with relatively simple surface structure, desirable range images can be obtained by these methods. Such range image generation methods have been studied and applied by the authors to the processing of remote sensing point clouds, and commendable application results were acquired [21]. However, for a measured object with relatively complex surface structure, a number of planes or curves on the surface of the object are concealed by other planes or curves on the surface of the object from the view angle. Using the method based on interpolation operation, the position information of laser points located in the concealed planes or curves on the surface of the measured object will inevitably disturb the quality of the generated range image in the overlapped region. Moreover, since only one view angle is employed, the structure information of the concealed planes or curves will be absent in the generated range image.

In order to overcome drawbacks of the range image generation methods based on single view angle and interpolation operation, a new kind of method based on mathematical morphology is proposed in this paper to generate multi-view-angle range images for complicated polyhedron in 3D space. Concept of three-view drawing in mechanical cartography is introduced into the range image generation procedure for complicated polyhedron in 3D space. From the multiple reasonably selected view angles, laser points on surface of complicated polyhedron can be suitably processed by using gridding and mathematical morphology operations. Structure information contained in surface of complicated polyhedron can be well represented in the generated multi-view-angle range images. Using digital image processing methods, surface structure information, for example, position of boundary lines, elevation gradients, curvatures, and roughness, can be estimated from the generated multi-view-angle range images for the measured complicated polyhedron. Using the estimated surface structure information as the prior knowledge, point cloud of the measured complicated polyhedron can be finely classified and segmented. High-precision structure information can be acquired from the classification and segmentation results. It can be deduced that the utilization of the proposed method will promote the application of laser scanning technologies in the related fields.

2. Methodology

2.1. Challenge

As aforementioned, when the method based on single view angle and interpolation operation is utilized to generate the range image for complicated polyhedron in 3D space, a number of planes or curves on surface of the polyhedron may be concealed by other planes or curves on surface of the polyhedron from the selected view angle in all probability. As a result, two negative consequences emerge. The first one is as follows: since a number of planes or curves are concealed by other planes or curves on the surface of the measured object from the selected view angle, the concealed planes or curves will be absent in the range image generated from the selected view angle by the method. The second one is as follows: since the interpolation operations are utilized, the position information of laser points in the concealed planes or curves will disturb the generated results in overlapped regions of concealing and concealed planes or curves in the selected view angle to a great extent.

A practical instance is employed to illustrate the aforementioned two negative consequences. In the instance, a point cloud obtained from a typical polyhedron model, which is known as “fandisk,” is adopted. The 3D view and vertical view of the point cloud and the model of “fandisk” are shown in Figures 1(a) and 1(b), respectively. The top surface of the model is denoted as plane A and indicated by a red solid line. Four flanks of the model are denoted as curve B, curve C, plane D, and plane E and indicated by yellow, green, blue, and pink dotted lines, respectively. The negative direction of the z-axis is selected as the view angle in the range image generation procedure for the “fandisk” model. As shown in Figure 1(b), from the selected view angle, the four flanks of the model, curve B, curve C, plane D, and plane E, are completely concealed by the top surface of the model, plane A.

Technically speaking, in a satisfactory range image generated for the model from the selected view angle, the four concealed flanks of the model should be absent, and the gray level distribution of pixels in the region corresponding to the top surface of the model should be smooth and monotonic. Using the method based on single view angle and interpolation operations, a range image is generated for the model, as shown in Figure 2. As indicated by green dotted lines, four regions in the top surface of the model, which are overlapped with the four concealed flanks of the model from the selected view angle, are quite unsmooth and do not coincide with the expectation. It can be deduced that high-precision structure information cannot be rapidly and reasonably estimated for the model from these four regions in the range image, no matter which kind of image processing method is employed. Consequently, in the proposed generation method, such a disadvantage of the method based on single view angle and interpolation operation must be avoided.

2.2. Flow of Proposed Generation Method

The flow chart of the proposed generation method is shown in Figure 3. By means of laser scanning technology, a point cloud is obtained from the complicated polyhedron located in 3D space. Then, the first view angle is selected for the generation procedure of range images. In the selected view angle, the gridding operation is implemented on the point cloud of the complicated polyhedron. Laser points in the point cloud are classified by the grids constructed on a certain plane in the 3D coordinate system, and the range values of laser points are converted to range values of the constructed grids from the selected view angle. Using the mathematical morphology operation, a range image is acquired for the polyhedron on the basis of the range values of the constructed grids. Subsequently, a judgement is made to find out whether the number of selected view angles is enough. If the number of selected view angles is not enough, one more view angle should be selected, and the aforementioned operations will be iteratively implemented to generate the range images for the polyhedron until the number of selected view angles meets the requirement. Finally, the range images generated from all selected view angles are output as the ultimate results, that is, the multi-view-angle range images, for the polyhedron.

2.3. Selection of View Angles

In the range image generation procedure of the proposed method, 3D structure information of the polyhedron needs to be reasonably projected onto a set of 2D reference planes. Thus, a series of appropriate view angles should be selected for the projection in advance. As aforementioned, in order to comprehensively describe the 3D structure information of the complicated polyhedron in the generated range images, the concept of three-view drawing in mechanical cartography is introduced in the proposed generation method. Six view angles in the 3D coordinate system, which correspond to the negative and positive directions of the x-, y-, and z-axes, respectively, are utilized in the proposed method. According to the customary rule in mechanical cartography, the six view angles employed in the proposed method are denoted as the vertical view, upward view, right view, left view, back view, and front view. As shown in Figure 4, the vertical and upward views correspond to the negative and positive directions of the z-axis, and the remaining four views correspond to the negative and positive directions of the x- and y-axes.

From the six employed view angles, 2D views of the “fandisk” model are obtained and shown in Figures 5(a)–5(f), respectively. In the obtained 2D views of the “fandisk” model, the gray levels of points on the surface of the model coincide with the range values of those points from the six employed view angles. It can be seen that all the planes and curves on the surface of the “fandisk” model can be found in the obtained 2D views. It can be deduced that all the planes and curves on the surface of the model can be found in a series of generated range images as well, if the range images are reasonably generated for the model from the aforementioned six view angles.

2.4. Gridding Operation

In the range image generation procedure, position information of laser points in point cloud of the complicated polyhedron should be projected into a series of 2D images. In order to appropriately set the resolution of these 2D images, gridding operation is implemented to the point cloud of complicated polyhedron. Correspondingly, grids constructed by using the gridding operation are regarded as the pixels in these 2D images.

Firstly, from the selected view angles, laser points in the point cloud of the polyhedron are mapped into the xoy plane, the yoz plane, and the xoz plane, respectively. Then, grids are constructed in the planes, and the laser points in the point cloud of the polyhedron are divided into the grids, as shown in Figure 6. Laser points mapped into the planes are indicated by solid points in the figure. The side length of the constructed square grids is denoted as $g$.

According to the distribution of the mapped laser points in the planes and the range values of the laser points along the mapping directions, range values of the constructed grids can be acquired by using the following assignment rules:

(a) When only one mapped laser point is divided into a grid, the range value of the grid is assigned the range value of the mapped laser point.

(b) When more than one mapped laser point is divided into a grid, the range value of the grid is assigned the mean value of the range values of the mapped laser points.

(c) When no mapped laser point is divided into a grid, the range value of the grid is assigned the minimum value of the range values of all the mapped laser points.

By means of the aforementioned assignment operations, the 3D position information of laser points in the point cloud of the complicated polyhedron is converted into range values of grids in a series of 2D images. Regarding the grids as pixels in images and regarding the range values of the grids as gray values of pixels in the images, range images of laser points in the point cloud of the polyhedron in all selected view angles are obtained.
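As an illustration of the three assignment rules, the following minimal NumPy sketch converts mapped laser points into grid range values; the function name grid_range_image and the array layout are illustrative assumptions rather than part of the authors' implementation.

```python
import numpy as np

def grid_range_image(uv, r, g):
    """Convert mapped laser points into a grid of range values.

    uv : (N, 2) array of 2D coordinates of points mapped into one plane
    r  : (N,) array of range values of the points along the mapping direction
    g  : side length of the square grids
    Returns a 2D array of range values, one value per grid (pixel).
    """
    # Grid (row, column) index of each mapped laser point
    col = ((uv[:, 0] - uv[:, 0].min()) // g).astype(int)
    row = ((uv[:, 1] - uv[:, 1].min()) // g).astype(int)
    n_rows, n_cols = row.max() + 1, col.max() + 1

    # Rule (c): grids containing no mapped point receive the minimum range value
    image = np.full((n_rows, n_cols), float(r.min()))

    # Rules (a) and (b): one point -> its range value; several points -> their mean
    sums = np.zeros((n_rows, n_cols))
    counts = np.zeros((n_rows, n_cols))
    np.add.at(sums, (row, col), r)
    np.add.at(counts, (row, col), 1)
    occupied = counts > 0
    image[occupied] = sums[occupied] / counts[occupied]
    return image
```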

It can be deduced that the side length of the constructed grids, that is, $g$, is a crucial parameter in the whole gridding operation. On the one hand, if $g$ is set unduly large, too many mapped laser points may be divided into the grids. In this case, local characteristics on the surface of the complicated polyhedron are normally inconspicuous in the obtained range images. On the other hand, if $g$ is set unduly small, too many grids will be constructed in the gridding operation. In this case, the computation time of the gridding operation is commonly unsatisfactory. In real applications of the gridding operation, $g$ is usually assigned the mean value of the distances between the laser points and their nearest neighbors in the mapping plane, so that each constructed grid contains only one mapped laser point as far as possible.

In this study, in order to find out the appropriate value of $g$, the distributions of laser points mapped into the xoy plane, the yoz plane, and the xoz plane are analyzed, respectively. The number of laser points in the point cloud of the complicated polyhedron is assumed to be $N$. The $i$th ($i = 1, 2, \ldots, N$) laser point in the point cloud is denoted as $p_i$, and the coordinates of the laser point are represented as $(x_i, y_i, z_i)$. When the laser point is mapped into the xoy plane, the mapped laser point is denoted as $p_i^{xoy}$, and the coordinates of $p_i^{xoy}$ can be represented as $(x_i, y_i)$. Similarly, another mapped laser point in the plane can be denoted as $p_j^{xoy}$ ($j = 1, 2, \ldots, N$ and $j \neq i$), and the coordinates of $p_j^{xoy}$ are represented as $(x_j, y_j)$. The distance between $p_i^{xoy}$ and $p_j^{xoy}$ in the plane is denoted as $d_{ij}^{xoy}$ and can be calculated as

$$d_{ij}^{xoy} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}. \quad (1)$$

For the $i$th mapped laser point, $p_i^{xoy}$, the minimum distance between it and the other laser points mapped into the xoy plane is denoted as $d_i^{xoy}$ and obtained as

$$d_i^{xoy} = \min_{j \neq i} d_{ij}^{xoy}. \quad (2)$$

Correspondingly, the mean value of the minimum distances of all laser points mapped into the xoy plane is denoted as $\bar{d}^{\,xoy}$ and obtained as

$$\bar{d}^{\,xoy} = \frac{1}{N} \sum_{i=1}^{N} d_i^{xoy}. \quad (3)$$

Similar operations are implemented to the laser points mapped into the yoz plane and the xoz plane. The $i$th ($i = 1, 2, \ldots, N$) laser points mapped into the yoz plane and the xoz plane are denoted as $p_i^{yoz}$ and $p_i^{xoz}$. The coordinates of $p_i^{yoz}$ and $p_i^{xoz}$ are represented as $(y_i, z_i)$ and $(x_i, z_i)$, respectively. Similarly, other mapped laser points in the yoz plane and the xoz plane are denoted as $p_j^{yoz}$ and $p_j^{xoz}$ ($j = 1, 2, \ldots, N$ and $j \neq i$). The coordinates of $p_j^{yoz}$ and $p_j^{xoz}$ are represented as $(y_j, z_j)$ and $(x_j, z_j)$, respectively. The distance between $p_i^{yoz}$ and $p_j^{yoz}$ is denoted as $d_{ij}^{yoz}$ and obtained as

$$d_{ij}^{yoz} = \sqrt{(y_i - y_j)^2 + (z_i - z_j)^2}. \quad (4)$$

The distance between $p_i^{xoz}$ and $p_j^{xoz}$ is denoted as $d_{ij}^{xoz}$ and obtained as

$$d_{ij}^{xoz} = \sqrt{(x_i - x_j)^2 + (z_i - z_j)^2}. \quad (5)$$

The minimum distance between $p_i^{yoz}$ and the other laser points mapped into the yoz plane is denoted as $d_i^{yoz}$, and the minimum distance between $p_i^{xoz}$ and the other laser points mapped into the xoz plane is denoted as $d_i^{xoz}$. They can be calculated as

$$d_i^{yoz} = \min_{j \neq i} d_{ij}^{yoz}, \qquad d_i^{xoz} = \min_{j \neq i} d_{ij}^{xoz}. \quad (6)$$

Furthermore, the mean values of the minimum distances of all laser points mapped into the yoz plane and the xoz plane are denoted as $\bar{d}^{\,yoz}$ and $\bar{d}^{\,xoz}$, respectively. They can be obtained as

$$\bar{d}^{\,yoz} = \frac{1}{N} \sum_{i=1}^{N} d_i^{yoz}, \qquad \bar{d}^{\,xoz} = \frac{1}{N} \sum_{i=1}^{N} d_i^{xoz}. \quad (7)$$

In order to make the comparison of range images generated from the different view angles more convenient, the resolutions of pixels in the generated 2D images are set to be identical. In other words, the side lengths of grids constructed in the xoy plane, the yoz plane, and the xoz plane are assigned the same value. In this study, the side length of grids in all mapped planes, $g$, is assigned the mean value of $\bar{d}^{\,xoy}$, $\bar{d}^{\,yoz}$, and $\bar{d}^{\,xoz}$ as

$$g = \frac{\bar{d}^{\,xoy} + \bar{d}^{\,yoz} + \bar{d}^{\,xoz}}{3}. \quad (8)$$

Using $g$, square grids are regularly constructed in the xoy plane, the yoz plane, and the xoz plane. Then, laser points are mapped into the planes, and the range values of the grids are acquired from the range values of the mapped laser points by using the aforementioned assignment rules. By regarding the regularly constructed grids as pixels in images and regarding the range values of the grids as gray values of pixels in the images, range images of laser points in the point cloud of the complicated polyhedron can be generated from all the selected view angles.
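The computation of the grid side length $g$ from the mean nearest-neighbor distances in the three mapped planes, that is, (1)–(8), can be sketched as follows; scipy.spatial.cKDTree is used here only as one convenient way to find nearest neighbors, and the helper names are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nearest_distance(pts2d):
    """Mean distance from each mapped point to its nearest neighbour, cf. (1)-(3)."""
    tree = cKDTree(pts2d)
    dists, _ = tree.query(pts2d, k=2)  # k=2: nearest neighbour excluding the point itself
    return dists[:, 1].mean()

def grid_side_length(points):
    """Grid side length g as the mean of the three per-plane mean minimum distances, cf. (8)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    d_xy = mean_nearest_distance(np.column_stack((x, y)))  # xoy plane
    d_yz = mean_nearest_distance(np.column_stack((y, z)))  # yoz plane
    d_xz = mean_nearest_distance(np.column_stack((x, z)))  # xoz plane
    return (d_xy + d_yz + d_xz) / 3.0
```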

2.5. Mathematical Morphology Operation

By means of gridding operation presented in previous section, range images of laser points in point cloud of complicated polyhedron can be acquired from all the selected view angles. Range values of laser points on the concealed planes and concealing planes from the selected view angles are both converted to the range values of corresponding grids in the acquired range images of laser points. If the traditional interpolation operation is employed to generate the range images for complicated polyhedron from the acquired range images of laser points, the second negative consequence presented in Section 2.1 will inevitably emerge in the overlapped regions of concealing and concealed planes in the generated range images; that is, generated results in overlapped regions will be unsmooth and anomalous. In order to avoid the appearance of such kind of faultiness, mathematical morphology operation is introduced to replace the traditional interpolation operation in the proposed method.

It is well known that mathematical morphology is a theory and technique on the basis of set theory, lattice theory, topology, and random functions [22, 23]. With the aid of complete foundations of mathematics, the idea and approach of mathematical morphology make a far-reaching influence on development of theory and technology of image processing [24, 25]. Nowadays, mathematical morphology is widely utilized in numerous relevant issues in the fields of image analysis and processing, for example, image denoising, image coding and compression, character feature recognition, biomedical or microscope image analysis, robotic vision, and industrial process detection [26, 27].

An instance is employed to illustrate the difference between the range value estimated by the traditional interpolation operation and the range value estimated by the mathematical morphology operation for a grid in the overlapped region of concealing and concealed planes. As shown in Figure 7, the grid is located in an overlapped region, and the center of the grid is indicated by a red dotted line. The negative direction of the z-axis is selected as the view angle. From the selected view angle, since the concealed plane is concealed by the concealing plane, the expected estimation result of the range value of this grid should be the range value of the concealing plane at the position of the center of the grid, which is indicated by a yellow triangle in the figure.

In Figure 7, there are three laser points close to the grid. Two laser points located on the concealing plane are indicated by gray dots, and one laser point located on the concealed plane is indicated by a black dot. Using the traditional interpolation operation, the range value of the grid is estimated from the position information of the laser points close to the grid. In Figure 7, the estimated range value is indicated by a blue triangle. Since the position information of the point located on the concealed plane is taken into account in the interpolation operation, the range value estimated for the grid is much smaller than the expected range value. In the generated range image, a hollow will exist at the position of the grid from the selected view angle.

However, when the mathematical morphology operation is utilized to estimate the range value of the grid, the position information of the laser point located on the concealed plane is not taken into account, with the aid of a suitable structure element. In other words, the range value of the grid is estimated only from the position information of the two laser points on the concealing plane. As a result, the range value estimated for the grid can be very close to the expected estimation result. Compared with the traditional interpolation operation, it can be deduced that the mathematical morphology operation is more suitable for estimating the range values of grids in range images of complicated polyhedron.

In real applications of mathematical morphology, a structure element is utilized as a “probe” to seek, acquire, or process the detailed structure features of objects in images. By using the designated structure element, four basic operations of mathematical morphology can be implemented. Generally, the operations are called dilation, erosion, open, and close and are denoted as $\oplus$, $\ominus$, $\circ$, and $\bullet$, respectively. In this paper, the designated structure element is denoted as $b$. For an image denoted as $f$, the dilation result and the erosion result of pixel $(x, y)$ in image $f$ can be obtained as

$$(f \oplus b)(x, y) = \max_{(s, t) \in b} f(x - s, y - t),$$
$$(f \ominus b)(x, y) = \min_{(s, t) \in b} f(x + s, y + t).$$

When the center of structure element $b$ is located at the position of pixel $(x, y)$, the maximum value of pixels in the overlapped region of image $f$ and element $b$ is extracted as the dilation result of pixel $(x, y)$ in image $f$, and the minimum value of pixels in the overlapped region of image $f$ and element $b$ is extracted as the erosion result of pixel $(x, y)$ in image $f$.

Correspondingly, the open and close operations for pixel $(x, y)$ in image $f$ are expressed as follows, respectively:

$$(f \circ b)(x, y) = \big((f \ominus b) \oplus b\big)(x, y),$$
$$(f \bullet b)(x, y) = \big((f \oplus b) \ominus b\big)(x, y).$$

When image $f$ is firstly processed with element $b$ by the erosion operation and then processed with an identical element by the dilation operation, the whole operational process is denoted as the open operation in mathematical morphology. Correspondingly, when image $f$ is firstly processed with element $b$ by the dilation operation and then processed with an identical element by the erosion operation, the whole operational process is denoted as the close operation in mathematical morphology. In actual implementation, the open operation is normally used to eliminate isolated points outside the objects in images, and the close operation is normally used to replenish hollow regions inside the objects in images.
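For reference, a close operation of this kind can be reproduced with the grey-scale morphology routines of scipy.ndimage, as in the minimal sketch below; the disk helper that builds the “disk” structure element and the function names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Flat 'disk' structure element of the given radius (in pixels)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def close_range_image(image, radius):
    """Close operation: dilation followed by erosion with the same 'disk' element."""
    footprint = disk(radius)
    dilated = ndimage.grey_dilation(image, footprint=footprint)
    closed = ndimage.grey_erosion(dilated, footprint=footprint)
    return closed, dilated  # the dilation result is also returned for inspection
```

The same closing can also be obtained in one call with ndimage.grey_closing(image, footprint=disk(radius)); the two-step form is kept here because the later iterative operation inspects the intermediate dilation result.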

Using gridding operation presented in previous section, a series of range images of laser points are generated for the complicated polyhedron from the selected view angles. However, due to the irregularity of distribution of laser points, a large number of hollow regions exist in the generated range images of laser points. On the basis of different behaviors of open and close operations, in this study, the close operation is introduced to replenish the hollow regions existing in range images of laser points.

In actual applications of close operation, two key issues must be taken into account. The first issue is as follows: which kind of structure element is suitable for processing range images of laser points? As aforementioned, close operation is used to replenish the hollow regions existing in range images of laser points in this study. Since the shape of “disk” structure element is strictly centrosymmetric in the 2D plane, the area replenished by using “disk” structure element should be larger than the areas replenished by using other structure elements when the scales of elements are identical. In other words, if the scales of structure elements are the same, replenishment efficiency of “disk” structure element is better than replenishment efficiencies of other structure elements. As a result, in this study, “disk” structure element is recommended to be employed to process the range images of laser points. The second issue is as follows: what scale of structure element should be selected for processing the range images of laser points? In real applications, if the selected scale of structure element is too large, surface details of planes and curves in the polyhedron will be absent in the generated range images; if the selected scale of structure element is too small, tiny hollows will still exist in the dilation operation results of range images of laser points for the polyhedron. Consequently, a few hollows will unavoidably exist in the final generated range images obtained by using close operation for the polyhedron.

Practical instances are employed to illustrate the second issue in detail. When the mathematical morphology operation is implemented on the “fandisk” model with a structure element whose scale is selected too large, the range image generated for the model is shown in Figure 8. Compared with the vertical view of the “fandisk” model shown in Figure 5(a), it can be seen that the gray level distribution of pixels in the region indicated by pink lines in the generated range image in Figure 8 is not consistent with the actual situation. From the vertical view of the model shown in Figure 5(a), it can be found that two curves actually exist on the surface of the model in the region indicated in Figure 8. However, since the scale of the structure element is selected too large, the surface details of these two curves in the indicated region are mixed together by the mathematical morphology operation. From the generated range image shown in Figure 8, only one curve can be found in the indicated region.

When the scale of structure element is selected too small, range image is generated for the “fandisk” model by using mathematical morphology operation once again. Dilation operation result of range image of laser points is obtained for “fandisk” model and shown in Figure 9(a), and close operation result of range image of laser points is obtained for “fandisk” model and shown in Figure 9(b). In Figures 9(a) and 9(b), two curves that cannot be differentiated in the region indicated by pink lines in Figure 8 can easily be distinguished from each other. However, since the scale of structure element is selected too small, some tiny hollows exist in the dilation operation result of range image of laser points for the model. An actual tiny hollow is indicated by solid yellow lines in Figure 9(a), and partially enlarged drawing for this tiny hollow is also shown in Figure 9(a) and indicated by solid red lines. From the partially enlarged drawing, it can be seen that gray level of the pixel at position of the hollow is obviously lower than gray levels of the surrounding pixels. Then, by using the erosion operation, range image is obtained for the model from the dilation operation result of range image of laser points, as shown in Figure 9(b). Due to the function of erosion operation, the hollow in dilation operation result of range image of laser points is exacerbated. In Figure 9(b), the exacerbated hollow is indicated by dotted yellow lines, and partially enlarged drawing for the exacerbated hollow is also shown in Figure 9(b) and indicated by dotted red lines. It is obvious that the obtained range image is not good enough to output as the final generation result for the model.

In this study, an iterative operation is proposed to overcome this problem. The flow chart of the proposed iterative operation is shown in Figure 10. In the preliminary stage of the implementation of mathematical morphology, the scale of the structure element is initialized as 1. Using the initialized structure element, the range image of laser points is processed by the dilation operation. Then, detection of pixels with local minimum gray levels is carried out on the dilation result of the range image of laser points. A schematic diagram of the detection principle is shown in Figure 11. In the dilation result of the range image of laser points, all 3 × 3 neighborhood regions of pixels in the image of the dilation result are analyzed in turn. If the gray level of the pixel at the center position of a region, for example, the gray level of the center pixel in Figure 11, is lower than the gray levels of the other 8 surrounding pixels in the region, the pixel at the center position of the region is regarded as a pixel with a local minimum gray level.

If a pixel with a local minimum gray level exists in the image of the dilation result, according to the iterative operation shown in Figure 10, the erosion operation will not be implemented. The scale of the structure element should be increased by 1, and the range image of laser points should be processed by the dilation operation once again. Then, detection of pixels with local minimum gray levels is carried out on the new dilation result image. If a pixel with a local minimum gray level still exists in the new dilation result image, the scale of the structure element is increased by 1 once again, and the iterative operation is executed continually. Once no pixel with a local minimum gray level exists in the obtained dilation result image, the iterative operation is finished. Subsequently, the erosion operation is implemented on the newly obtained dilation result image, and the acquired image is output as the final generated range image for the polyhedron from the selected view angle. For the range images of laser points generated for the polyhedron from all the selected view angles, the aforementioned mathematical morphology operation is repeatedly implemented to obtain the multi-view-angle range images for the polyhedron.
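A minimal sketch of this iterative operation is given below, assuming the “disk” structure element and the 3 × 3 local-minimum test described above; the max_scale guard and all function names are added assumptions for illustration, not settings taken from the paper.

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Flat 'disk' structure element of the given radius (in pixels)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def has_local_minimum(image):
    """True if some interior pixel is strictly lower than all 8 of its neighbours."""
    neighbourhood = np.ones((3, 3), dtype=bool)
    neighbourhood[1, 1] = False  # exclude the centre pixel itself
    neigh_min = ndimage.minimum_filter(image, footprint=neighbourhood, mode='nearest')
    return bool(np.any(image[1:-1, 1:-1] < neigh_min[1:-1, 1:-1]))

def iterative_close(range_image, max_scale=50):
    """Enlarge the 'disk' element until the dilation result contains no pixel
    with a locally minimum gray level, then apply the erosion."""
    scale = 1
    while scale <= max_scale:
        footprint = disk(scale)
        dilated = ndimage.grey_dilation(range_image, footprint=footprint)
        if not has_local_minimum(dilated):
            break
        scale += 1  # tiny hollows remain: increase the scale and dilate again
    closed = ndimage.grey_erosion(dilated, footprint=footprint)
    return closed, scale
```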

3. Experiment and Results

3.1. Test Data

In order to validate the proposed multi-view-angle range images generation method, two typical models chosen as representatives for complicated polyhedrons were used in the experiments. The first model is known as “fandisk.” This model has already been presented in the above sections. Point cloud of “fandisk” model is shown in Figure 12(a). There are 6475 laser points included in point cloud of the model. Average density of laser points obtained from surface of the model is 107 laser points per square meter. The second chosen model is known as “oil pump.” Point cloud of “oil pump” model is shown in Figure 12(b). There are 30937 laser points included in point cloud of “oil pump” model. Average density of laser points obtained from surface of the model is 166 laser points per square meter. In point clouds of the chosen models, distribution of laser points is discrete and uneven.

3.2. Experiments and Results

As presented in Section 2.3 of this paper, six view angles in the 3D coordinate system, which correspond to the negative and positive directions of the x-, y-, and z-axes, respectively, were used as the selected view angles in the proposed range image generation method. From all the selected view angles, the point clouds of the “fandisk” and “oil pump” models were processed by using the proposed method to generate the multi-view-angle range images.

Firstly, using (1)–(3), the mean value of the minimum distances of all laser points mapped into the xoy plane was calculated. Then, using (4)–(7), the mean values of the minimum distances of all laser points mapped into the yoz and xoz planes were calculated as well. The side length of grids in all mapped planes, $g$, was obtained by (8) for the models. The value of $g$ obtained for the “fandisk” model was 0.03 m, and that obtained for the “oil pump” model was 0.02 m. Using the obtained $g$, square grids were regularly constructed in the xoy plane, the yoz plane, and the xoz plane, respectively. Subsequently, laser points were mapped into the planes in turn, and the range values of the grids were acquired from the range values of the mapped laser points by means of the assignment rules presented in Section 2.4. At last, by regarding the regularly constructed grids as pixels in images and regarding the range values of the grids as gray values of pixels in the images, range images of laser points were generated for the point clouds of the “fandisk” and “oil pump” models from all the selected view angles and are shown in Figures 13(a)–13(f) and 14(a)–14(f), respectively.

In order to acquire the desired range images for the “fandisk” and “oil pump” models, the range images of laser points in the point clouds of the models were processed by the proposed mathematical morphology operation. For the range image of laser points generated from every selected view angle, the iterative operation presented in Section 2.5 was implemented to find out the suitable scale of the structure element. Then, with the aid of the close operation, range images were generated for the “fandisk” and “oil pump” models from all the selected view angles by using the structure elements with suitable scales, as shown in Figures 15(a)–15(f) and 16(a)–16(f), respectively. Compared with the 2D views of the “fandisk” and “oil pump” models, which were obtained from the six selected view angles and are shown in Figures 5(a)–5(f) and 17(a)–17(f), it can be found that the distribution of gray levels of pixels in the model regions in the range images generated by the proposed method is basically consistent with the distribution of gray levels of pixels on the surfaces of the models in the 2D views of the models. As presented in Section 2.3, the gray levels of pixels on the surfaces of the models in the 2D views coincide with the range values of the corresponding points on the surfaces of the models from the selected view angles. Consequently, it can be deduced that the gray levels of pixels in the model regions in the range images generated by the proposed method are basically consistent with the range values of the corresponding points on the surfaces of the models from the six selected view angles. That is to say, the structure characteristics of the planes and curves on the surfaces of the “fandisk” and “oil pump” models are well expressed in the multi-view-angle range images generated by using the proposed method.

In order to prove the superiority of the proposed multi-view-angle range images generation method, a comparison test was carried out. The range image generation method based on single view angle and interpolation operation, which was proposed in [21], was used as the reference for evaluation of the proposed method. Using this method, a range image is generated from only one view angle for the complicated polyhedron with the aid of the interpolation operation. According to the customary rule in real applications, the negative direction of the z-axis, that is, the vertical view angle for the polyhedron in 3D space, was selected as the view angle to generate the range image. In this study, using the gridding and interpolation operations, range images were generated for the “fandisk” and “oil pump” models by this method from the selected view angle, as shown in Figures 2 and 18, respectively. It can be seen that texture details of the models in the generated range images are quite different from texture details of the models in the vertical views of the models, which are shown in Figures 5(a) and 17(a), respectively.

Compared with range images generated for “fandisk” and “oil pump” models by using the method based on single view angle and interpolation operation, it can be found that two advantages exist in range images generated for the models by using the proposed method. The first advantage is as follows: since five extra view angles were selected as the view angles to generate the range images, planes and curves on bottoms and flanks of the “fandisk” and “oil pump” models were expressed in the generated range images, as shown in Figures 15(b)–15(f) and 16(b)–16(f). The second advantage is as follows: since a novel iterative operation of mathematical morphology was introduced in the proposed method, range images generated from the selected view angles for the “fandisk” and “oil pump” models by using the proposed method are more satisfactory and more consistent with the reality than range images generated from the selected view angle for the models by using the method based on single view angle and interpolation operation.

To further validate the quality and effectiveness of the proposed multi-view-angle range images generation method, a quantitative comparison was performed between range images generated by the proposed method and range images generated by the method based on interpolation operation. The method proposed in [21] was used as a representative of the methods based on interpolation operation. Using this method, from all six selected view angles, range images were generated for “fandisk” and “oil pump” models and shown in the first column of Figure 19(a) and in the first column of Figure 19(b), respectively. Correspondingly, range images generated by using the proposed method are shown in the third column of Figure 19(a) and in the third column of Figure 19(b), respectively. From the 2D views of “fandisk” and “oil pump” models, which are shown in Figures 5 and 17, gray level images were extracted and shown in the second column of Figure 19(a) and in the second column of Figure 19(b), respectively. Images located in the same row in Figure 19(a) or in Figure 19(b) have an identical resolution ratio. In this study, gray level images extracted from the 2D views of models were utilized as the standard images to evaluate the qualities of range images generated by using the different methods.

The mean values of gray level deviations between pixels in the generated range images and pixels in the gray level images extracted from the 2D views of the models were calculated. For example, the gray value of the $k$th pixel in the range image generated by the proposed method from the vertical view for the “fandisk” model, which is located in the third column and the first row of Figure 19(a), is denoted as $G_k^{\mathrm{pro}}$, and the resolution ratio of the image is $M \times N$, that is, $k = 1, 2, \ldots, M \times N$. The gray value of the $k$th pixel in the gray level image extracted from the vertical 2D view of the model, which is located in the second column and the first row of Figure 19(a), is denoted as $G_k^{\mathrm{std}}$. The resolution ratio of this image is $M \times N$ as well. The mean value of gray level deviations between $G_k^{\mathrm{pro}}$ and $G_k^{\mathrm{std}}$ is denoted as $\bar{D}_{\mathrm{pro}}$ and can be obtained as

$$\bar{D}_{\mathrm{pro}} = \frac{1}{M \times N} \sum_{k=1}^{M \times N} \left| G_k^{\mathrm{pro}} - G_k^{\mathrm{std}} \right|.$$

The gray value of the $k$th pixel in the range image generated by the interpolation-based method proposed in [21] from the vertical view for the “fandisk” model, which is located in the first column and the first row of Figure 19(a), is denoted as $G_k^{\mathrm{int}}$, and the resolution ratio of the image is $M \times N$ as well. The mean value of gray level deviations between $G_k^{\mathrm{int}}$ and $G_k^{\mathrm{std}}$ ($k = 1, 2, \ldots, M \times N$) is denoted as $\bar{D}_{\mathrm{int}}$ and can be obtained as

$$\bar{D}_{\mathrm{int}} = \frac{1}{M \times N} \sum_{k=1}^{M \times N} \left| G_k^{\mathrm{int}} - G_k^{\mathrm{std}} \right|.$$

Using this method, the mean values of gray level deviations between pixels in all generated range images and pixels in the gray level images extracted from the 2D views of the models were calculated and listed in Tables 1 and 2, respectively.
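Read as a mean absolute difference between corresponding pixels, this deviation measure can be computed as in the following short sketch; the function name and the float conversion are assumptions made for illustration.

```python
import numpy as np

def mean_gray_deviation(generated, standard):
    """Mean absolute gray level deviation between a generated range image and
    the standard gray level image of identical resolution."""
    generated = np.asarray(generated, dtype=float)
    standard = np.asarray(standard, dtype=float)
    assert generated.shape == standard.shape, "images must share the same resolution"
    return float(np.mean(np.abs(generated - standard)))
```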

In the field of digital image processing, Hausdorff distance is a classical parameter to describe the extent of dissimilarity between two images [28]. In this study, Hausdorff distances between the generated range images and gray level images extracted from 2D views of models were calculated and listed in Tables 1 and 2, respectively. Furthermore, to evaluate the computation complexity of the proposed method, computation times consumed by using the different generation methods were recorded and listed in Tables 1 and 2 as well. In this paper, all the experiment results were acquired by using the same computer with a 2.2 GHz quad-core central processing unit, 16 GB random access memory, and 64-bit MATLAB software.
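The paper does not state how the two images are reduced to point sets before the Hausdorff distance is computed; the sketch below assumes, purely for illustration, that each image is reduced to the coordinates of its pixels above a threshold and uses scipy's directed_hausdorff in both directions to obtain the symmetric distance.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_between_images(img_a, img_b, threshold=0):
    """Symmetric Hausdorff distance between two images, with each image reduced
    to the (row, col) coordinates of its pixels above `threshold` (an assumption)."""
    pts_a = np.argwhere(np.asarray(img_a) > threshold)
    pts_b = np.argwhere(np.asarray(img_b) > threshold)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)
```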

From Tables 1 and 2, it can be seen that both the mean values of gray level deviations and the Hausdorff distances obtained from the generation results of the proposed method are smaller than those obtained from the generation results of the interpolation-based method proposed in [21]. This demonstrates that range images generated by the proposed method are more consistent with the gray level images extracted from 2D views of the models, which are used as standard images in the experiments. That is to say, the quality of range images generated by the proposed method is higher than the quality of range images generated by the interpolation-based method. Consequently, the effectiveness of the proposed method is verified. From Tables 1 and 2, it can also be seen that the computation times consumed by using the proposed method are shorter than those consumed by constructing 2D views of models from point clouds. Although the computation times consumed by using the proposed method are a little longer than those consumed by using the interpolation-based method, the proposed method still deserves to be recommended owing to its conspicuous effectiveness.

4. Discussion

Generally, distribution of laser points in point cloud is discrete and anomalous. For complicated polyhedron in 3D space, a holonomic point cloud is hard to be acquired from a certain view angle. In real applications, multisite scanning mode is often used to get the holonomic point cloud for polyhedron. Since the holonomic point cloud is obtained by combining point clouds acquired from multiple sites, distribution of laser points in the obtained holonomic point cloud is usually even more anomalous. It is very difficult to directly extract surface structure information of the polyhedron from the holonomic point cloud. Consequently, an effective estimation of surface structure for the polyhedron is quite necessary in preliminary stage of high-precision structure measurement of the polyhedron.

In real applications, in order to rapidly extract surface structure information of the complicated polyhedron from the holonomic point cloud, range image of the polyhedron should be generated from the holonomic point cloud in advance. Then, with the aid of the advanced digital image processing technology, surface structure information of the polyhedron is effectively estimated from the generated range image. Subsequently, on the basis of the estimated information, point cloud of the polyhedron can be finely classified and analyzed in detail. It is obvious that quality of the generated range image has a great influence on accuracy of the final structure measurement result for the polyhedron.

As aforementioned, existing range image generation methods mainly aim to generate range image for polyhedron from only one view angle with the aid of gridding and interpolation operations. Since only one view angle is selected to generate range image for the polyhedron, if planes or curves on surface of the polyhedron are concealed by other planes or curves on surface of the polyhedron from the selected view angle, the structure information of concealed planes or curves cannot be effectively expressed in the generated range image. Furthermore, the generation results in overlapped regions of concealing and concealed planes or curves in the selected view angle will be disturbed by position information of laser points in the concealed planes or curves on surface of the polyhedron. From range images generated by using the interpolation-based method for “fandisk” and “oil pump” models, it can be seen that structure information of concealed planes and curves on surface of models is absent in the generated range images and a great number of unreasonable anomalous regions or hollows exist in the generated range images.

In the multi-view-angle range image generation method proposed in this paper, two improvements are included. The concept of three-view drawing in mechanical cartography is introduced into the range image generation procedure for complicated polyhedron in 3D space. Negative and positive directions of the x-, y-, and z-axes are selected as view angles to generate multi-view-angle range images for the polyhedron. Furthermore, a novel iterative mathematical morphology operation is proposed to generate high-quality range images for the polyhedron in this study. By means of the proposed iterative operation, suitable scales of the structure element are obtained for the close operation of mathematical morphology. Satisfactory range images can be generated for the polyhedron from all the selected view angles by using the proposed method. From the range images generated for the “fandisk” and “oil pump” models by using the proposed method, it can be seen that all planes and curves on the surfaces of the models appear in the generated range images and the distribution of gray levels of pixels in the regions of the models in the generated range images is consistent with the expectation. It can be deduced that structure information of the planes and curves on the surfaces of the models can be efficiently estimated from the generated range images by using appropriate image processing techniques. In real applications, the estimation results will play a very important role in high-precision structure measurement of the models.

A simple instance is employed to analyze whether high-quality structure information can be estimated from the range images generated by the proposed method. It is known that the Canny operator [29] can be utilized to detect boundary lines in images. In this paper, the range images and gray level image generated for “oil pump” from the vertical view, which are shown in the first row of Figure 20, are processed by using the Canny operator. Boundary lines of planes and curves on the surface of the model, which are detected from the gray level image obtained from the 2D view of the model, are shown in the first column and the second row of Figure 20. It can be seen that the majority of boundary lines of planes or curves on the surface of the model are properly extracted from the gray level image obtained from the 2D view of the model. Boundary lines of planes and curves on the surface of the model, which are detected from the range image generated by using the proposed method, are shown in the second column and the second row of Figure 20. It can be seen that boundary lines extracted from the range image generated by using the proposed method are basically consistent with boundary lines extracted from the gray level image obtained from the 2D view of the model. Using these extracted boundary lines as the prior knowledge, high-precision and high-speed point cloud classification and segmentation can be achieved in all probability. Then, high-precision structure characteristic parameters can be obtained from the classification and segmentation results. Boundary lines of planes and curves on the surface of the model, which are detected from the range image generated by using the interpolation-based method, are shown in the third column and the second row of Figure 20. It can be seen that many important boundary lines of the model are absent in the detection results, and a lot of false boundary lines are extracted due to hollow regions in the range image. Using such boundary line extraction results as the prior knowledge, high-precision and high-speed point cloud classification and segmentation will be very difficult to achieve. As a result, it can be deduced that the proposed method is more promising than the interpolation-based method in the related fields of point cloud processing. In our future work, high-precision structure measurement of complicated polyhedron with the aid of range images generated for the polyhedron by using the proposed method will be further studied.
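A hedged sketch of such a boundary detection step is given below, using the Canny implementation in scikit-image; the normalization step and the sigma value are tunable assumptions rather than settings reported in the paper.

```python
import numpy as np
from skimage import feature

def detect_boundaries(range_image, sigma=2.0):
    """Detect boundary lines of planes and curves in a range image with the Canny operator."""
    img = np.asarray(range_image, dtype=float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalize gray levels to [0, 1]
    return feature.canny(img, sigma=sigma)  # boolean edge map of detected boundary lines
```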

5. Conclusion

In this paper, a new kind of method is proposed to generate multi-view-angle range images for complicated polyhedron in 3D space. In the proposed method, the concept of three-view drawing in mechanical cartography is introduced. Negative and positive directions of the x-, y-, and z-axes are selected as view angles to generate range images for the complicated polyhedron. Furthermore, by means of the gridding operation and the proposed iterative mathematical morphology operation, satisfactory range images are generated for the complicated polyhedron from all the selected view angles. In the experiments, comparison tests between the proposed generation method and the method based on single view angle and interpolation operation were carried out on point clouds of two typical models. The obtained experimental results show that the proposed multi-view-angle range image generation method is more effective and promising than the generation method based on single view angle and interpolation operation.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors gratefully acknowledge the financial support from the National Natural Science Foundation of China (nos. 61501394, 61601399, and 51305390) and Natural Science Foundation of Hebei province of China (nos. F2016203155 and F2016203312).