Advances in Multimedia

Volume 2018, Article ID 4189125, 9 pages

https://doi.org/10.1155/2018/4189125

## Height Estimation of Target Objects Based on Structured Light

Department of Modern Education Technology, Ludong University, Yantai, China

Correspondence should be addressed to Wei Liu; ldulw@sina.com

Received 20 June 2018; Accepted 25 September 2018; Published 1 November 2018

Guest Editor: Shengping Zhang

Copyright © 2018 Wei Liu and Yongsheng Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Height estimation of a target object is an important research direction in computer vision. Three-dimensional reconstruction with structured light offers high precision, noncontact measurement, and a simple setup, and is widely used in military simulation and cultural heritage protection. In this paper, the height of a target object is estimated using line-structured light. Based on a height dictionary, the height is estimated from the offset produced as the structured light stripe moves across the object. In addition, effective preprocessing of the captured structured light images, such as dilation and skeleton extraction, increases the flexibility of the method for objects of different shapes and allows the height of the target object to be estimated more accurately.

#### 1. Introduction

In recent years, with the development of science and technology, three-dimensional reconstruction, as an important part of machine vision, has attracted increasing attention, especially in industrial product design and cultural heritage protection. Three-dimensional reconstruction based on structured light can recover the surface of an object by laser scanning without touching it, which protects cultural relics from damage. This can contribute greatly to the excavation of ancient culture and the spread of Chinese civilization. Therefore, three-dimensional reconstruction based on structured light has important practical significance for the protection of cultural heritage and the design of industrial products [1–3]. At present, line-structured light scanning is the main method used in three-dimensional reconstruction. As a noncontact active measurement technique, structured light reconstruction offers low cost, high precision, real-time operation, and strong anti-interference ability, and these characteristics promise good development prospects in the coming years [4, 5].

3D surface reconstruction aims to recover the actual shape of real-world objects and has become an important topic in computer vision, in which researchers worldwide have made considerable achievements. A structured light reconstruction system mainly consists of a camera and a laser; an ordinary camera suffices for detection, but the experimental results depend on the type of structured light used. According to the laser projection pattern, structured light can be divided into point, line, and multiline types. Point-structured light projects a single beam that illuminates one point on the measured surface, so the camera recovers the three-dimensional coordinates of only that point, and the amount of information is too small. Line-structured light projects a light plane whose intersection with the measured object yields one cross section, and the algorithm is easy to use. Multiline structured light projects multiple light planes that form multiple laser lines on the object surface, so each picture yields multiple cross sections and a large amount of information; however, the light stripes must be matched, which greatly increases the difficulty and complexity of the algorithm, and this approach is still at the experimental research stage [6–8].

Researchers at home and abroad have studied 3D surface reconstruction in depth. Horn [9] proposed the concept of shape from shading (SFS), an influential idea for three-dimensional shape reconstruction. Its main content is to reconstruct the three-dimensional shape of the object surface by analyzing the light direction, the brightness, the surface shape of the object, and the grayscale variation of the reflection model. Ikeuchi and Horn [10] solved the three-dimensional reconstruction by using the image irradiance equation together with a smoothness criterion as the reconstruction constraint, so that the 3D reconstruction problem is transformed into the minimization of a functional. Horn later proposed another smoothness standard for smooth surfaces obeying an integrability constraint; because that algorithm directly recovers the unit normal vectors of a complex surface, the reconstruction cannot obtain the absolute height of the surface. Fuqiang Zhou [11] achieved cross-laser plane calibration in a similar way: four edge feature points of a spatial disk are obtained and the radius of the disk is calculated by fitting the feature points, with an absolute radius error of 0.059 mm. Dongbin Zhao of Harbin Institute of Technology [12] and other scholars put forward a new algorithm for recovering object surface height and gradient from a monocular image by iterative computation on a composite image, obtaining accurate surface heights, and they validated the feasibility of the algorithm on actual solder joint images.
Ruiling Liu [13] put forward a four-light-source vector selection algorithm for highlights and shadows: the normal vectors recovered at different pixels are compared with the mirror reflection direction, and the normal vector nearest to it is chosen to restore the shape. This avoids the error caused by threshold-based elimination of highlights and shadows in the traditional algorithm and removes the constraints that highlights and shadows impose, expanding the scope of application of the algorithm.

#### 2. Based on the Gradient of Moving Objects Detection

In 3D reconstruction with structured light, in order to recover the 3D structure from the 2D image taken by the camera, the camera must be calibrated and the geometric model of camera imaging established; that is, the camera's intrinsic and extrinsic parameters must be measured. Then the correspondence between image points and spatial points is constructed; that is, the laser plane equation is calibrated. This paper mainly uses Zhengyou Zhang's camera calibration method [14].

##### 2.1. Camera Parameter Calibration

The camera model is very similar to the one used by Heikkila and Silven of the University of Oulu in Finland; see in particular their CVPR'97 paper on a four-step camera calibration procedure with implicit image correction [15].

In the camera model, the parameters are as follows:

Focal length: the focal length in pixels, stored in the 2×1 vector $fc$.

Principal point: the coordinates of the principal point, stored in the 2×1 vector $cc$.

Skew factor: the skew coefficient defining the angle between the $x$ and $y$ pixel axes, stored in the scalar $alpha\_c$.

Distortion: the image distortion coefficients (radial and tangential), stored in the 5×1 vector $kc$.

Let $XX_c = [X_c;\, Y_c;\, Z_c]^T$ be the coordinate vector of a spatial point $P$ in the reference frame of the camera. The point is then projected on the image plane according to the intrinsic parameters $(fc, cc, alpha\_c, kc)$.

Let $x_n$ be the normalized (pinhole) image projection:

$$x_n = \begin{bmatrix} X_c/Z_c \\ Y_c/Z_c \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix}$$

Let $r^2 = x^2 + y^2$. After the lens distortion is applied, the new normalized point coordinates $x_d$ are defined as

$$x_d = \begin{bmatrix} x_d(1) \\ x_d(2) \end{bmatrix} = \left(1 + kc(1)\,r^2 + kc(2)\,r^4 + kc(5)\,r^6\right) x_n + dx,$$

where $dx$ is the tangential distortion vector:

$$dx = \begin{bmatrix} 2\,kc(3)\,x y + kc(4)\,(r^2 + 2x^2) \\ kc(3)\,(r^2 + 2y^2) + 2\,kc(4)\,x y \end{bmatrix}.$$

Therefore, $kc$ contains both the radial and the tangential distortion coefficients [16]. It is worth noting that this distortion model was first introduced by Brown in 1966 and is called the "Plumb Bob" model (radial polynomial + "thin prism"). The tangential distortion is due to "decentering", i.e., imperfect alignment of the lens elements in a compound lens, or other manufacturing defects of the lens assembly.

Once the distortion is applied, the final pixel coordinates $x_{pixel} = [x_p;\, y_p]^T$ of the projection of $P$ on the image plane are:

$$\begin{cases} x_p = fc(1)\,\bigl(x_d(1) + alpha\_c \cdot x_d(2)\bigr) + cc(1) \\ y_p = fc(2)\,x_d(2) + cc(2) \end{cases}$$

Thus, the pixel coordinate vector $x_{pixel}$ and the normalized (distorted) coordinate vector $x_d$ are related to each other by the linear equation [17]

$$\begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = KK \begin{bmatrix} x_d(1) \\ x_d(2) \\ 1 \end{bmatrix},$$

where $KK$ is called the camera matrix and is defined as follows:

$$KK = \begin{bmatrix} fc(1) & alpha\_c \cdot fc(1) & cc(1) \\ 0 & fc(2) & cc(2) \\ 0 & 0 & 1 \end{bmatrix}.$$
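The projection pipeline of this section (normalized pinhole projection, radial and tangential distortion, then the camera matrix) can be sketched in a few lines of NumPy. All parameter values below are assumed for illustration only, not calibrated values from the paper:

```python
import numpy as np

# Assumed example parameters in Bouguet-style notation:
# fc (2x1 focal length), cc (2x1 principal point), alpha_c (skew),
# kc (5x1 distortion; kc(1), kc(2), kc(5) radial, kc(3), kc(4) tangential).
fc = np.array([800.0, 800.0])
cc = np.array([320.0, 240.0])
alpha_c = 0.0
kc = np.array([-0.2, 0.05, 1e-3, -1e-3, 0.0])

def project(Xc):
    """Project a 3-D point in camera coordinates to pixel coordinates."""
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]          # normalized pinhole projection
    r2 = x * x + y * y
    radial = 1 + kc[0] * r2 + kc[1] * r2**2 + kc[4] * r2**3
    dx = np.array([2 * kc[2] * x * y + kc[3] * (r2 + 2 * x * x),
                   kc[2] * (r2 + 2 * y * y) + 2 * kc[3] * x * y])
    xd = radial * np.array([x, y]) + dx          # distorted normalized point
    KK = np.array([[fc[0], alpha_c * fc[0], cc[0]],
                   [0.0,   fc[1],          cc[1]],
                   [0.0,   0.0,            1.0]])
    xp = KK @ np.array([xd[0], xd[1], 1.0])      # linear map to pixels
    return xp[:2]

p = project(np.array([0.1, -0.05, 1.0]))
print(p)
```

Note that the radial factor multiplies the undistorted normalized point, and the tangential vector is added afterwards, matching the order of the equations above.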

##### 2.2. External Condition Variable Setting

In reconstructing the surface height of the object, we use similar triangles. When two triangles are similar, their corresponding angles are equal. Thus, in a world coordinate system whose origin lies at the intersection of the optical axis and the object, there exists a proportional relationship between the tangent values of the corresponding angles of the two right triangles, so the angle $\theta$ between the camera and the structured light and the distance $d$ between the optical center and the object must be known. These two quantities are determined by the physical placement of the camera and the laser. Therefore, when setting up the camera and the structured light, we need to measure the angle $\theta$ and the length $d$, and these two are invariants; that is, throughout the whole shooting process we must ensure that they remain unchanged, or the reconstructed object will be distorted. So, while taking pictures, we must keep these external variables fixed in order to reconstruct the surface of the object well.
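As a minimal numeric illustration of the similar triangle relation, assuming one common rig geometry (camera axis perpendicular to the reference plane, laser plane inclined at the fixed angle to the optical axis) and made-up values for the angle and the stripe offset:

```python
import math

# Hedged sketch, not the paper's exact derivation: under the assumed
# geometry, an object of height h shifts the laser stripe on the reference
# plane by s = h * tan(theta), so the height is recovered as
# h = s / tan(theta). Both values below are assumed example numbers.
theta = math.radians(30.0)   # assumed angle between light plane and optical axis
offset = 12.5                # assumed observed stripe displacement (mm)
height = offset / math.tan(theta)
print(round(height, 2))
```

The example makes concrete why the angle and the camera-to-object distance must stay fixed during shooting: both enter the triangulation directly, so any drift rescales every recovered height.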

#### 3. Basic Principle of Three-Dimensional Reconstruction of Structured Light

In structured light measurement, in order to obtain the three-dimensional information of the object, the basic idea is to use the geometric information in the structured light image to help provide the geometric information of the scene [19]. From the geometric relationships inside the camera, we can determine the geometric relationship between the structured light and the object, and thus reconstruct the surface of the object.

##### 3.1. The Correspondence between Pixels and a World Coordinate Point

As shown in Figure 1, the angle between the structured light plane and the optical axis of the camera is $\theta$, and the origin of the world coordinate system is located at the intersection of the camera's optical axis and the structured light plane. The $x$-axis and the $y$-axis are parallel to the camera coordinate axes $X_c$ and $Y_c$, respectively, and $z$ and $Z_c$ coincide but point in opposite directions [20]. The distance between the two origins is $d$. Thus, the world coordinate system and the camera coordinate system have the following relationship:
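Under the geometry just described, the world-to-camera mapping can be sketched as follows; the distance value is assumed for illustration, and the exact relation in the paper follows from Figure 1:

```python
import numpy as np

# Hedged sketch of the coordinate relation: x, y parallel to X_c, Y_c;
# z collinear with Z_c but pointing the opposite way; origins separated
# by d along the optical axis. d is an assumed example value.
d = 500.0  # assumed distance between the two origins (mm)

def world_to_camera(p_w, d=d):
    """Map a world point (x, y, z) to camera coordinates (X_c, Y_c, Z_c)."""
    x, y, z = p_w
    # x and y carry over unchanged; z is reversed and offset by d.
    return np.array([x, y, d - z])

p_c = world_to_camera((10.0, -20.0, 30.0))
print(p_c)
```

Because the axes are parallel, the transform reduces to a sign flip and a translation; a general camera pose would instead require a full rotation matrix and translation vector, as in the calibration model of Section 2.1.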