Special Issue: Artificial Intelligence Techniques for Joint Sensing and Localization in Future Wireless Networks
Intelligent Point Cloud Edge Detection Method Based on Projection Transformation
An edge detection method based on projection transformation is proposed. First, a vertical projection transformation is carried out on the target point cloud: the x and y data are normalized to the width and height of the image, respectively, and the z data is normalized to the range 0-255, where the depth represents the gray level of the image. Then, the Canny algorithm is used to detect the edges of the projection-transformed image, and the detected edge data is back-projected to extract the edge point cloud from the original point cloud. Performance is evaluated by calculating the normal vector of the edge point cloud. Compared with the normal vector of the whole target point cloud, the normal vector of the edge point cloud expresses the characteristics of the target well, and the calculation time is reduced to 10% of the original.
As a key automation technology, machine vision is very important to the modernization of the economy and has been widely studied. Machine vision uses machines, instead of human eyes, to measure the size of a target or inspect its surface. It mainly uses computers to simulate the function of human vision and to reproduce certain intelligent behaviors related to it: information is extracted from the image of an object, processed and understood, and finally used for practical detection and control. Machine vision started from statistical pattern recognition in the 1950s, with the main work focused on two-dimensional image analysis, recognition, and understanding. In recent years, various noncontact research results have emerged [1–4]. In the industrial field, machine vision applications include several aspects: surface detection is used in product quality inspection and product classification; a camera and a robot can be combined to package products; and feature detection is used in robot positioning. Civil machine vision technology is widely used in intelligent transportation, safety protection, character recognition, identity verification, medical equipment, etc. In scientific research, machine vision can be used for material analysis, biological analysis, chemical analysis, and life science. In the military field, it can be used in aerospace, aviation, weapons, and mapping. Its technology mainly includes image processing, mechanical engineering, control, and optical imaging.
With the rapid development of 3D acquisition technology, 3D sensors are becoming more available and affordable, including various types of 3D scanners, LiDAR, and RGB-D cameras (such as Kinect, RealSense, and Apple depth cameras). The 3D data from these sensors can provide rich geometry, shape, and scale information. Complementing 2D images, 3D data provides an opportunity to better understand the environment around the machine. 3D data can be represented in different formats, including depth images, point clouds, meshes, and volumetric grids. As a common format, the point cloud representation preserves the original geometry in 3D space without any discretization and is the preferred representation for scene-understanding applications in many scenarios. 3D point cloud detection is widely researched. Ali et al. [5] build on the success of the one-shot regression meta-architecture in the 2D perspective image space and extend it to generate oriented 3D object bounding boxes from LiDAR point clouds. Zhou et al. [6] remove the need for manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single-stage, end-to-end trainable deep network. Meyer et al. [7] present LaserNet, a computationally efficient method for 3D object detection from LiDAR data for autonomous driving. Beltran et al. [8] present a LiDAR-based 3D object detection pipeline entailing three stages. Minemura et al. [9] employ dilated convolutions to gradually increase the receptive field as depth increases, which helps to reduce the computation time by about 30%. Asvadi et al. [10] address the problem of vehicle detection using a Deep Convolutional Neural Network (ConvNet) and 3D-LiDAR data, with application in advanced driver assistance systems and autonomous driving, and propose a vehicle detection system based on the Hypothesis Generation (HG) and Verification (HV) paradigms. Simon et al. [11] propose a specific Euler-Region-Proposal Network (E-RPN) to estimate the pose of the object by adding an imaginary and a real fraction to the regression network.
Edge detection is a key technology for detecting targets [12]. Most of the information of an image exists at its edges, mainly represented by discontinuities in the local features of the image. Edge detection was first proposed for two-dimensional digital images; the purpose is to identify and detect the positions where the image characteristics change [13]. Point cloud edges refer to edge measurement points that can express target features. Point cloud edges can not only express the geometric characteristics of the object but also play an important role in the quality and accuracy of object recognition and surface model reconstruction [14, 15]. As an important research field of image analysis and computer vision, edge detection has attracted the attention of many scholars, and a variety of mature edge detection algorithms have been developed.
Different point cloud data models have different edge feature extraction methods, which can be roughly divided into grid-based and scattered point cloud-based methods [16, 17]. In grid-based feature extraction, the point cloud is first gridded, and then the edge features are obtained by traversing the triangulated point cloud under threshold constraints. Among these methods, the well-known Delaunay triangulation algorithm is simple and intuitive, but during triangulation the Euclidean distance between points must be evaluated, and if the distance threshold is not suitable, holes are generated. In addition, when applied to three-dimensional point clouds, the method needs the normal direction of each point to determine the projection direction, so it is more suitable for uniform and smooth point clouds. Feature extraction based on scattered point clouds mainly extracts regular points, lines, surfaces, and other features, so it pays more attention to local features. Song et al. [18] take the root mean square of the normal vector of each point and those of its adjacent points as the criterion for edge feature extraction. Although this method reflects the relationship between the normal direction of each point and its neighbours well, nonedge points adjacent to the edge are also detected in the extraction result. Han et al. [19] keep the edge features by exploiting the fact that the normal direction of boundary points differs from that of nonboundary points, but the density of the edge is the same as that of the nonedge part. Chen et al. [20] proposed a feature extraction algorithm with multiparameter constraints, in which feature points are determined by normal, curvature, and Euclidean distance. In [21, 22], principal component analysis (PCA) and normal-based methods were used to extract edge feature points.
Referring to the edge detection algorithms of two-dimensional images, this paper proposes a Canny operator based on projection transformation for edge detection of point cloud data and obtains the normal vector of the edge point cloud after edge detection. The edge point cloud reflects the characteristics of the target well, and the speed of solving the normal vector is greatly improved: compared with the normal vector of the whole target point cloud, the normal vector of the edge point cloud expresses the characteristics of the target well, and the calculation time is reduced to 10% of the original.
2. Methodology
2.1. Vertical Projection of Point Cloud Data
Projection transformation is the process of transforming the coordinates of points in one map projection into the coordinates of another map projection. A 3D point cloud is a massive set of points that expresses the spatial distribution and surface characteristics of targets in the same spatial reference system; it is the collection of points obtained by sampling the spatial coordinates of each point on the object surface. Compared with a 2D image, a 3D point cloud usually contains only coordinate information, but it carries additional spatial information, and both its geometric structure and the neighbourhood structure of the data are more complex.
In this paper, a Canny edge detection algorithm based on projection transformation is proposed to detect the edges of point cloud data. The point cloud data is projected along the vertical direction onto the XY two-dimensional plane, and the projected point cloud data is normalized: the x direction represents the width, the y direction represents the height, and the z value represents the gray value of the pixel in the image. The edges of the transformed data are detected, and the final edge point cloud is then obtained by inverse transformation.
Supposing the number of points in the cloud is N, all points are represented as

P = {p_i | i = 1, 2, …, N}.

Here, p_i = (x_i, y_i, z_i) is the three-dimensional coordinate of point i.
The point cloud data is vertically projected along the z direction, and the z value is converted into the depth value of the current point. The projected point set is expressed as

P' = {(x_i, y_i, d_i) | d_i = z_i, i = 1, 2, …, N}.
For the projected point set, the maximum and minimum values in the x and y directions are computed and marked as x_max, x_min, y_max, and y_min. The x- and y-axis data are quantized into a form corresponding to the image width W and height H. Then, the abscissa u_i and ordinate v_i of p_i are

u_i = round((x_i - x_min) / (x_max - x_min) × (W - 1)),
v_i = round((y_i - y_min) / (y_max - y_min) × (H - 1)).
A linear transformation is then performed on the depth values. In order to realize the correspondence between the point cloud and the image, the minimum and maximum values of z are marked as z_min and z_max. The z value is linearly transformed to the range 100~255; a pixel containing no projected point is represented by 0. The transformed value g_i acts like the gray value of an image:

g_i = round((z_i - z_min) / (z_max - z_min) × 155) + 100.
The point set after quantization is expressed as the image I, with

I(v_i, u_i) = g_i, i = 1, 2, …, N.
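The quantization above can be sketched in Python as follows (a minimal illustration, not the authors' implementation; the image size of 64 × 64 is an assumed example, and the 100-255 depth mapping follows the description above):

```python
import numpy as np

def project_to_image(points, width=64, height=64):
    """Vertically project an (N, 3) point cloud onto the XY plane and
    quantize it into a gray image: x -> column, y -> row, z -> gray in 100..255.
    Pixels that receive no projected point stay 0, as in the paper."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Quantize x and y to pixel coordinates in [0, width-1] and [0, height-1].
    u = np.round((x - x.min()) / (x.max() - x.min()) * (width - 1)).astype(int)
    v = np.round((y - y.min()) / (y.max() - y.min()) * (height - 1)).astype(int)
    # Linearly map depth to the gray range 100..255.
    if z.max() > z.min():
        g = np.round((z - z.min()) / (z.max() - z.min()) * 155).astype(int) + 100
    else:
        g = np.full(len(z), 255)
    image = np.zeros((height, width), dtype=np.uint8)
    image[v, u] = g  # later points overwrite earlier ones at the same pixel
    return image
```

A pixel value of 0 thus unambiguously marks "no point projected here", which is why the gray range starts at 100 rather than 0.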
As an example, the industrial part target is vertically projected, and the projection result is shown in Figure 1.
(a) Original target of tee
(b) Point cloud of tee
(c) Vertical projection of tee
(d) Original target of elbow
(e) Point cloud of elbow
(f) Vertical projection of elbow
Figures 1(a) and 1(d) are the original targets of the tee and elbow. Figures 1(b) and 1(e) are their point clouds. Figures 1(c) and 1(f) are their vertical projections. Because the viewing angle is along the z direction when shooting the target, the shape of the point cloud data after vertical projection along z is consistent with the features of the original image.
3. Edge Detection with Canny Operator
Canny edge detection was first proposed by John Canny in the 1986 paper "A Computational Approach to Edge Detection." It is a technique for extracting useful structural information from different visual objects while greatly reducing the amount of data to be processed, and it has been widely used in various computer vision systems. Canny found that the requirements of edge detection in different vision systems are similar, so a widely applicable edge detection technique can be achieved. The Canny algorithm is based on three basic objectives: (1) low error rate: all edges should be found, with no spurious responses, capturing as many true edges in the image as accurately as possible; (2) good localization: the detected edge points should be accurately located at the center of the real edge; (3) single edge point response: the detector should not return multiple edge pixels where only a single edge point exists. To meet these requirements, Canny used the calculus of variations. The optimal function in the Canny detector is described by the sum of four exponential terms, which can be approximated by the first derivative of a Gaussian function. The Canny edge detection algorithm can be divided into the following five steps: (1) a Gaussian filter is used to smooth the image and remove noise; (2) the gradient intensity and direction of each pixel in the image are calculated; (3) nonmaximum suppression is applied to eliminate spurious responses; (4) double-threshold detection is applied to determine real and potential edges; (5) finally, edge detection is completed by suppressing isolated weak edges.
3.1. Gaussian Smoothing
Gaussian smoothing is a 2D convolution operation applied to blur the image and remove detail and noise. In order to reduce the influence of noise on the edge detection result as much as possible, the noise must be filtered out to prevent false detections. To smooth the image, a Gaussian filter is convolved with it, reducing the effect of obvious noise on the edge detector. The generation equation of a Gaussian filter kernel of size (2k + 1) × (2k + 1) is given by

H_ij = (1 / (2πσ²)) exp(-((i - k - 1)² + (j - k - 1)²) / (2σ²)), 1 ≤ i, j ≤ 2k + 1.

Here, σ is the standard deviation of the distribution, whose mean is assumed to be 0; that is, the kernel is centered. The distribution of the two-dimensional Gaussian filter kernel is shown in Figure 2.
Gauss kernels corresponding to different combinations of standard deviation σ and kernel size (such as 3 × 3 and 5 × 5) can be computed from this equation.
The choice of the Gaussian convolution kernel size affects the performance of the Canny detector: the larger the size, the lower the sensitivity of the detector to noise, but the positioning error of edge detection increases slightly.
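The kernel-generation formula above can be sketched in Python (an illustrative helper, not the authors' implementation; the kernel is normalized so its weights sum to 1, a common convention):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Generate a normalized (size x size) Gaussian kernel, size = 2k + 1,
    from H_ij = exp(-((i-k-1)^2 + (j-k-1)^2) / (2*sigma^2)) / (2*pi*sigma^2)."""
    k = (size - 1) // 2
    ax = np.arange(size) - k                 # offsets -k..k from the center
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return kernel / kernel.sum()             # normalize so the weights sum to 1
```

For example, `gaussian_kernel(5, 1.0)` yields a symmetric 5 × 5 kernel whose largest weight sits at the center, matching the bell shape sketched in Figure 2.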
If a window in the image is W and the pixel to be filtered is f(x, y), then after Gaussian filtering, the value e(x, y) of the pixel is

e(x, y) = H ∗ f(x, y),

where ∗ is the convolution symbol.
3.2. Calculate the Intensity and Direction of the Gradient
Using discrete difference operators, convolution is carried out along the x-axis and y-axis, respectively, to obtain the gray-level change values in the horizontal and vertical directions. The gradient magnitude G and direction θ are determined by

G = sqrt(G_x² + G_y²), θ = arctan(G_y / G_x).

Here, G_x and G_y are the first derivatives in the horizontal and vertical directions, respectively. Let S_x and S_y be the Sobel operators:

S_x = [-1 0 1; -2 0 2; -1 0 1], S_y = [1 2 1; 0 0 0; -1 -2 -1],

where S_x, the Sobel operator in the x direction, detects vertical edges, and S_y, the Sobel operator in the y direction, detects horizontal edges (the edge direction is perpendicular to the gradient direction). G_x and G_y are the convolutions of S_x and S_y with the image data I:

G_x = S_x ∗ I, G_y = S_y ∗ I.
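A minimal Python sketch of this gradient computation with the Sobel operators (illustrative only; a "valid" convolution is assumed for simplicity, so the output is slightly smaller than the input):

```python
import numpy as np

SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # x-direction Sobel
SY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)  # y-direction Sobel

def conv2_valid(image, kernel):
    """2D 'valid' convolution (kernel is flipped, as in true convolution)."""
    kh, kw = kernel.shape
    fk = kernel[::-1, ::-1]
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * fk)
    return out

def gradient(image):
    """Return gradient magnitude and direction from the Sobel responses."""
    gx = conv2_valid(image, SX)
    gy = conv2_valid(image, SY)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

A vertical intensity step produces a strong horizontal gradient (large G_x, zero G_y), consistent with S_x responding to vertical edges.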
3.3. Nonmaximum Suppression
Nonmaximum suppression is an edge-thinning technique. After calculating the gradient of the image, the edges extracted from the gradient values alone are still very blurry, while an edge should have only one accurate response. Nonmaximum suppression suppresses all gradient values except the local maximum to 0. The algorithm for each pixel in the gradient image is as follows. (1) The gradient intensity of the current pixel is compared with the two pixels along the positive and negative gradient directions. (2) If the gradient intensity of the current pixel is the largest of the three, the pixel is retained as an edge point; otherwise, it is suppressed.
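The two steps above can be sketched as follows (a simplified illustration that quantizes the gradient direction into four sectors, as is common in Canny implementations; border pixels are left suppressed):

```python
import numpy as np

def non_max_suppression(magnitude, direction):
    """Keep a pixel only if its gradient magnitude is the local maximum along
    the (quantized) gradient direction; otherwise suppress it to 0."""
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    angle = np.rad2deg(direction) % 180      # fold directions into [0, 180)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:       # ~horizontal gradient: compare left/right
                n1, n2 = magnitude[i, j - 1], magnitude[i, j + 1]
            elif a < 67.5:                   # ~45 degrees
                n1, n2 = magnitude[i - 1, j + 1], magnitude[i + 1, j - 1]
            elif a < 112.5:                  # ~vertical gradient: compare up/down
                n1, n2 = magnitude[i - 1, j], magnitude[i + 1, j]
            else:                            # ~135 degrees
                n1, n2 = magnitude[i - 1, j - 1], magnitude[i + 1, j + 1]
            if magnitude[i, j] >= n1 and magnitude[i, j] >= n2:
                out[i, j] = magnitude[i, j]
    return out
```

On a three-pixel-wide gradient ridge, only the central column survives, which is exactly the "thin edge" effect described above.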
3.4. Double-Threshold Detection and Suppression of Isolated Weak Edge Points
In order to remove edge pixels caused by noise, high and low thresholds are selected: edge pixels whose gradient is greater than the high threshold are retained as strong edge pixels, pixels whose gradient is below the low threshold are suppressed, and pixels whose gradient lies between the low and high thresholds are marked as weak edge pixels to be examined further.
Generally, weak edge pixels caused by real edges are connected to strong edge pixels, while those caused by noise are not. To track edge connections, each weak edge pixel and its eight neighbours are examined: when one of the neighbours is a strong edge pixel, the weak edge pixel is retained as a real edge.
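This connectivity test can be sketched as a breadth-first search seeded at the strong edge pixels (an illustrative implementation, not the authors' code; it also keeps weak pixels reachable through chains of weak pixels, the usual hysteresis behaviour):

```python
import numpy as np
from collections import deque

def hysteresis(magnitude, low, high):
    """Double-threshold edge tracking: strong pixels (>= high) seed a
    breadth-first search; weak pixels (>= low) are kept only if connected
    to a strong pixel through their 8-neighbourhood."""
    strong = magnitude >= high
    weak = magnitude >= low
    edges = np.zeros(magnitude.shape, dtype=bool)
    queue = deque(zip(*np.nonzero(strong)))
    for i, j in queue:                       # strong pixels are edges by definition
        edges[i, j] = True
    while queue:                             # grow edges into connected weak pixels
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < edges.shape[0] and 0 <= nj < edges.shape[1]
                        and weak[ni, nj] and not edges[ni, nj]):
                    edges[ni, nj] = True
                    queue.append((ni, nj))
    return edges
```

Isolated weak pixels never enter the search frontier and are therefore suppressed, completing step (5) of the Canny pipeline.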
Edge detection is performed on the image after the point cloud is vertically projected, and the result is shown in Figure 3.
(a) Vertical projection of tee
(b) Edge of tee
(c) Vertical projection of elbow
(d) Edge of elbow
Figures 3(a) and 3(c) are the vertical projection images of the tee and elbow. Figures 3(b) and 3(d) are the edges of the vertical projection images. Through edge detection, the edge features of the target can be detected. Compared with the target point cloud, the edge features represent the target very well, and the number of points is greatly reduced.
4. Back Projection Transform of Edge Detection Data
In order to express the three-dimensional information of the edge points, the edge detection data is back-projected onto the original point cloud. The points in the edge map obtained after projection transformation of the point cloud are represented as E(u, v), in which u is the horizontal coordinate, with value range [0, W - 1], and v is the vertical coordinate, with value range [0, H - 1]. When E(u, v) = 255, the point is an edge point, and the corresponding point in the point cloud should be found. First, the x and y of the point are calculated. The transformation method is as follows:

x = u / (W - 1) × (x_max - x_min) + x_min,
y = v / (H - 1) × (y_max - y_min) + y_min.
In the original point cloud data, the points whose horizontal and vertical coordinates match an edge pixel are searched for and marked as edge points.
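A minimal Python sketch of this search, assuming the same quantization as in the forward projection (function and variable names are hypothetical, not the authors' code; rather than inverting each edge pixel, it equivalently re-projects every point and looks up its pixel in the edge map):

```python
import numpy as np

def back_project_edges(points, edge_map, width, height):
    """Map edge pixels of the projected image back to 3D edge points.
    Assumes the forward quantization u = round((x - x_min)/(x_max - x_min) * (W - 1)),
    and likewise for v. Returns the subset of `points` whose pixel is an edge pixel."""
    x, y = points[:, 0], points[:, 1]
    u = np.round((x - x.min()) / (x.max() - x.min()) * (width - 1)).astype(int)
    v = np.round((y - y.min()) / (y.max() - y.min()) * (height - 1)).astype(int)
    is_edge = edge_map[v, u] > 0   # look up each point's pixel in the edge map
    return points[is_edge]
```

Because every point is tested against the edge map in one vectorized pass, the search cost is linear in the number of points.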
The effect of the edge point cloud after the inverse transformation of the edge points is shown in Figure 4, in which gray represents the target and red represents the edge point cloud.
(a) Point cloud of tee
(b) Edge point cloud of tee
(c) Point cloud of elbow
(d) Edge point cloud of elbow
Figures 4(a) and 4(c) are the point clouds of the tee and elbow. Figures 4(b) and 4(d) are their edge point clouds. The numbers of original points and edge points are counted and shown in Table 1.
The number of edge points of each target is around 10% of the number of original points. Extracting the edge points thus provides a reliable basis for subsequent point cloud normal vector calculation and point cloud registration.
5. Normal Vector Calculation of Edge Points Based on PCA
After edge detection, the normal vector of the locally fitted plane is taken as the normal vector of each edge point. Suppose the edge point is p_i; a k-neighbourhood search is performed in the original point cloud dataset, and the best-fit plane ax + by + cz = d is calculated. Here, n = (a, b, c) is the normal vector of the plane equation, d represents the distance from the origin to the plane, and a² + b² + c² = 1. The distance from each neighbouring point (x_j, y_j, z_j) to the plane is |a x_j + b y_j + c z_j - d|. The best-fit plane minimizes the sum of squared distances over the neighbourhood, that is,

min Σ_j (a x_j + b y_j + c z_j - d)².
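A compact PCA-based sketch of this normal estimation (illustrative; the eigenvector of the neighbourhood covariance with the smallest eigenvalue minimizes the squared point-to-plane distances, which is the standard PCA solution to the fit above):

```python
import numpy as np

def pca_normal(neighbours):
    """Estimate the normal of the best-fit plane through a (k, 3) array of
    neighbouring points: the covariance eigenvector with the smallest
    eigenvalue is the direction of least variance, i.e. the plane normal."""
    centered = neighbours - neighbours.mean(axis=0)
    cov = centered.T @ centered / len(neighbours)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # direction of least variance
    return normal / np.linalg.norm(normal)
```

Because only the k neighbours of each edge point are involved, the cost of the normal computation scales with the number of edge points, which is about a tenth of the full cloud.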
The normal vector estimation of the edge point cloud is shown in Figure 5.
(a) Point cloud of tee
(b) Normal vector of point cloud of tee
(c) Edge point cloud of tee
(d) Normal vector of edge point cloud of tee
(e) Point cloud of elbow
(f) Normal vector of point cloud of elbow
(g) Edge point cloud of elbow
(h) Normal vector of edge point cloud of elbow
Because the number of points corresponding to edge features is greatly reduced, the calculation time of using edge features to calculate normal vector is also greatly shortened. The calculation time is shown in Table 2.
It can be seen from Figure 5 and Table 2 that, compared with the normal vector of the whole target point cloud, the normal vector of the edge point cloud expresses the characteristics of the target well. The processing time of the tee target is reduced from 53.03 s to 4.94 s, and that of the elbow target from 42.03 s to 4.37 s, which is around 10% of the original.
6. Conclusion
Point cloud data is large and contains much invalid information, so extracting the characteristics of the point cloud is very important. Edge features express the geometric features of the target well, so extracting the edge point cloud is essential. This paper proposes an edge detection algorithm based on projection transformation. First, the target point cloud is projected vertically; then, the Canny algorithm is used to detect the edges of the projected image, and the detected edge data is back-projected to extract the edge point cloud. By calculating the normal vector of the edge point cloud and comparing it with the normal vector of the whole target point cloud, it is shown that the edge point cloud expresses the characteristics of the target well, and the calculation time is reduced to 10% of the original, which greatly saves computation time. This paper only tests the tee and elbow targets and demonstrates the advantages of edge features in normal vector calculation; the advantages of the edge point cloud in point cloud matching need further study.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Acknowledgments
This paper is supported by the Development and Reform Commission of Jilin Province (2020C018-3) and the Jilin Provincial Department of Education (JJKH20210726KJ).
References
[1] S. W. Lee, S. Sarp, D. J. Jeon, and J. H. Kim, "Smart water grid: the future water management platform," Desalination and Water Treatment, vol. 55, no. 2, pp. 339–346, 2015.
[2] J. Zhang, J. J. Cao, X. Liu, H. Chen, B. Li, and L. Liu, "Multi-normal estimation via pair consistency voting," IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 4, pp. 1693–1706, 2019.
[3] W. Dong, "Building point cloud feature extraction using geometric features of adjacent points," Laser and Optoelectronics, vol. 55, no. 7, pp. 181–188, 2018.
[4] J. Zhu, J. P. Huang, and L. M. Wang, "Laser printing files detection method based on double features," International Journal of Pattern Recognition and Artificial Intelligence, vol. 32, no. 10, p. 1854028, 2018.
[5] W. Ali, S. Abdelkarim, and M. Zidan, "YOLO3D: end-to-end real-time 3D oriented object bounding box detection from LiDAR point cloud," in Lecture Notes in Computer Science, pp. 716–728, Springer, Cham, 2019.
[6] Y. Zhou and O. Tuzel, "VoxelNet: end-to-end learning for point cloud based 3D object detection," 2017, https://arxiv.org/abs/1711.06396.
[7] G. P. Meyer, A. Laddha, E. Kee, C. Vallespi-Gonzalez, and C. K. Wellington, "LaserNet: an efficient probabilistic 3D object detector for autonomous driving," in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019.
[8] J. Beltran, C. Guindel, F. M. Moreno, D. Cruzado, F. Garcia, and A. De La Escalera, "BirdNet: a 3D object detection framework from LiDAR information," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 2018.
[9] K. Minemura, H. Liau, A. Monrroy, and S. Kato, "LMNet: real-time multiclass object detection on CPU using 3D LiDARs," in 2018 3rd Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), Singapore, 2018.
[10] A. Asvadi, L. Garrote, C. Premebida, P. Peixoto, and U. J. Nunes, "DepthCN: vehicle detection using 3D-LiDAR and ConvNet," in 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 2017.
[11] M. Simon, S. Milz, K. Amende, and H. M. Gross, "Complex-YOLO: real-time 3D object detection on point clouds," 2018, https://arxiv.org/abs/1803.06199.
[12] S. H. B. Xia and W. R. SH, "A fast edge extraction method for mobile LiDAR point clouds," IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 8, pp. 1288–1292, 2017.
[13] C. H. J. Ding, G. Sun, and L. L. Yin, "Boundary extraction of scattered point cloud," Computer Technology and Development, vol. 27, no. 7, pp. 83–86, 2017.
[14] J. X. Li, E. Q. Wu, and Y. L. Ke, "3D reconstruction of small diameter pipes inner surface based on structural light," Chinese Journal of Scientific Instrument, vol. 27, no. 3, pp. 254–258, 2006.
[15] J. Ye, Z. Gao, X. Liu, W. Wang, and C. Zhang, "Freeform surfaces reconstruction based on Zernike polynomials and radial basis function," Acta Optica Sinica, vol. 34, no. 8, pp. 0822003–0822241, 2014.
[16] J. Z. Zhou and Y. J. Yan, "Research on the feature extraction technology of the point cloud in reverse engineering," Equipment Manufacturing Technology, vol. 8, no. 13-17, p. 33, 2019.
[17] Z. H. X. Duan, Research on Data Reduction and Surface Reconstruction of 3D Laser Scanning and Its Application, China University of Mining and Technology, 2019.
[18] H. Song, H. Y. Feng, and D. S. Ouyang, "Automatic detection of tangential discontinuities in point cloud data," Journal of Computing and Information Science in Engineering, vol. 8, no. 2, pp. 1–10, 2008.
[19] H. Y. Han, X. Han, and S. F. SH, "Point cloud simplification with preserved edge based on normal vector," International Journal for Light and Electron Optics, vol. 126, no. 19, pp. 2157–2162, 2015.
[20] L. Chen, Y. Cai, and J. S. H. Zhang, "Feature extraction of scattered point cloud based on hybrid method of multiple discriminant parameters," Computer Application Research, vol. 34, no. 9, pp. 2867–2870, 2017.
[21] T. Hackel, J. D. Wegner, and K. Schindler, "Contour detection in unstructured 3D point clouds," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1610–1618, 2016.
[22] S. H. Y. Pei, N. Du, and L. Wang, "Feature extraction of building point cloud based on moving least squares normal vector estimation," Bulletin of Surveying and Mapping, vol. 4, pp. 73–77, 2018.