
Advances in Mechanical Engineering

Volume 2013 (2013), Article ID 587904, 6 pages

http://dx.doi.org/10.1155/2013/587904

## Reference Sphere Positioning Measurement Based on Line-Structured Light Vision Sensor

Bin Wu and Yuan Zhang

State Key Laboratory of Precision Measuring Technology & Instrument, Tianjin University, Tianjin 300072, China

Received 28 June 2013; Accepted 3 September 2013

Academic Editor: Fuqiang Zhou

Copyright © 2013 Bin Wu and Yuan Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The line-structured light vision sensor is widely used in industrial vision measurement because of its simple structure, small volume, light weight, low cost, convenient calibration, and high measurement accuracy. To locate a reference sphere precisely with such a sensor, a mathematical model based on the measuring principle of the line-structured light vision sensor is established in this paper, and the positioning measurement error is then analyzed in detail. The experimental results show that the method is valid and correct. In addition, an accurate measurement area, defined by a range of distances from the center of the reference sphere, is delimited through statistical analysis of the experimental data. For robot temperature compensation and calibration in flexible vision measurement systems, this method effectively solves the reference sphere positioning measurement problem with the line-structured light vision sensor and has been applied successfully in industrial flexible online measurement systems.

#### 1. Introduction

The line-structured light vision sensor consists of a line-structured light projector (a linear laser) and a camera [1]. This type of sensor has many advantages, such as simple structure, small volume, light weight, low cost, convenient calibration, and high measurement accuracy. Hence, it is widely used in industrial vision measurement [2, 3]. In flexible measurement systems based on industrial robots [4] in particular, the line-structured light vision sensor has more prominent advantages in flexibility and spatial accessibility than stereo vision sensors (binocular or multicamera).

In a flexible online vision measurement system, each robot is generally configured with one line-structured light vision sensor. In principle, only points in the structured light plane can be measured for positioning. However, the measured characteristics are of various types, such as edges (inflection points) [5], round holes (or elliptical holes) [6, 7], square holes (or rectangular holes), and spheres. Liu et al. [8] studied seam tracking based on a line-structured light vision sensor. Wu et al. [7] proposed a two-step method for spatial circle orientation with a line-structured light vision sensor and analyzed the orientation errors in detail. Theoretically, this method can also realize the spatial measurement of symmetrical features such as elliptical, square, and rectangular holes. However, little research has addressed reference sphere positioning measurement with the line-structured light vision sensor.

The industrial robot is the motion platform of the flexible online measuring system. It is well known that industrial robots have high position repeatability but low absolute positioning accuracy [9]. In addition, every joint motor generates a large amount of heat as the robot moves, which leads to significant changes in robot joint lengths and other robot parameters [10]. Due to the extreme complexity of the joint temperature distribution, it is difficult to establish an accurate temperature distribution model for calculating the parameters that change, so external auxiliary devices are generally introduced into the system for robot parameter calibration and temperature compensation [11]. The reference sphere is a spatially symmetrical geometric object, so there is no strict posture requirement for the vision sensor during the positioning measurement. During robot parameter calibration and temperature compensation, the sphere is an ideal auxiliary object, and its center acts as a fixed-point physical constraint in the measurement space. Franke et al. [9] used four spheres with known diameters and center points as the calibration objects for exterior orientation. Guo et al. [11] used the center of a sphere as the global control point in the measured field to compensate the positioning error of the robot. Therefore, an accurate positioning measurement method for the reference sphere with the line-structured light vision sensor is essential for robot temperature compensation and parameter calibration.

To locate the reference sphere precisely, a mathematical model based on the measuring principle of the line-structured light vision sensor is established in this paper, and the positioning measurement error is then analyzed in detail. Finally, experiments are carried out to verify the effectiveness and accuracy of the proposed measurement method. Meanwhile, an accurate measurement area is delimited through statistical analysis of the experimental data.

#### 2. Measuring Principle of the Line-Structured Light Vision Sensor

The line-structured light vision sensor contains a line-structured laser and a camera. The line-structured laser projects a fan-shaped light plane in space, which intersects with the measured characteristics. The camera captures the light stripe image modulated by the characteristics, and 3D coordinates of points in the light stripe can be derived by image processing, triangulation measuring principle, and mathematical model of the line-structured light vision sensor.

As shown in Figure 1, o_w-x_w y_w z_w is the 3D world coordinate frame, and o_c-x_c y_c z_c is the camera coordinate frame. Note that o_c is the optical center of the camera, the x_c axis and the y_c axis coincide with the increasing directions of column and row on the image plane, respectively, and the z_c axis is the optical axis of the camera. In addition, the image plane is perpendicular to the z_c axis at a distance of 1 from o_c. O-XY is assumed to be the 2D image plane coordinate frame, where O is the intersection of the image plane and the optical axis z_c; the X axis and the Y axis are parallel with the x_c axis and the y_c axis, respectively. Let the camera coordinate frame be the sensor coordinate frame, so the equation of the light plane π in o_c-x_c y_c z_c can be used as the mathematical model of the line-structured light vision sensor. Assume that a point P on the light plane π has the 3D coordinate (x, y, z) in o_c-x_c y_c z_c and the homogeneous coordinate (x, y, z, 1)^T; then, the equation of light plane π in o_c-x_c y_c z_c is given by

F · (x, y, z, 1)^T = 0, (1)

where F = (a, b, c, d) is the equation coefficient vector of light plane π, namely, the parameters of the sensor [12]. These parameters can be obtained precisely by sensor calibration [13–15].

If p is the ideal projection of P on the image plane, then o_c, p, and P are collinear [16]. Let the homogeneous coordinate of p in O-XY be (X, Y, 1)^T. The equation of the straight line o_c P can then be represented as

(x, y, z)^T = k (X, Y, 1)^T, (2)

where k is an arbitrary nonzero constant.

If the sensor parameters and the image point p are known, the 3D coordinate of the measured point P can be obtained from (1) and (2).

The relationship between the camera coordinate frame and the world coordinate frame follows the rigid-body coordinate transformation, which can be built by means of an auxiliary target [17].
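As a numeric sketch of this triangulation step (all names and values hypothetical, not the paper's calibrated parameters), the following snippet intersects the camera ray through a normalized image point with a light plane a·x + b·y + c·z + d = 0, as in (1) and (2):

```python
import numpy as np

def intersect_ray_with_light_plane(plane, xy_image):
    """Triangulate a light-stripe point: intersect the camera ray through the
    normalized image point (X, Y, 1) with the light plane a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    X, Y = xy_image
    denom = a * X + b * Y + c
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    k = -d / denom                 # scale factor along the ray o_c -> (X, Y, 1)
    return k * np.array([X, Y, 1.0])

# Illustrative plane z = 500 mm (coefficients (0, 0, 1, -500)) and image point (0.1, -0.05)
P = intersect_ray_with_light_plane((0.0, 0.0, 1.0, -500.0), (0.1, -0.05))
# -> [50.0, -25.0, 500.0], which lies both on the ray and on the plane
```

The guard on `denom` handles the degenerate case where the viewing ray is parallel to the light plane and no intersection exists.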

#### 3. Mathematical Model of Reference Sphere Positioning Measurement

According to the measuring principle, the line-structured light vision sensor can measure points on the light plane, but it cannot measure points outside the light plane [18]. When the light plane is projected on the reference sphere and the intersecting cross-section is the maximum circular cross-section of the sphere surface, the 3D coordinates of points on the circumference of this largest cross-section can be obtained directly by the measuring model of the line-structured light vision sensor. If the 3D coordinate of a point on the cross-section circumference is (x, y, z), then

(x − x_0)^2 + (y − y_0)^2 + (z − z_0)^2 = R^2, (3)

where R is the radius of the reference sphere and (x_0, y_0, z_0) is the center of the largest fitting circular cross-section, that is, the center of the reference sphere.

In the actual process of the reference sphere positioning measurement, it is difficult to meet the requirement that the light plane intersects with the maximum circular cross-section. As shown in Figure 2, O_0 is the center of the reference sphere, and the actual intersecting cross-section of the light plane and the reference sphere is a smaller circle. For this fitting circular cross-section, the center of the fitting circle is O_1 = (x_1, y_1, z_1), the radius of the fitting circle is r, and the normal vector of the cross-section is the normal vector of the light plane:

n = (a, b, c)^T. (4)

The maximum circular cross-section is parallel with the actual intersecting cross-section. Let h be the distance between the sphere center O_0 and the fitting circle center O_1; then, with sphere radius R and fitting circle radius r,

h = √(R^2 − r^2). (5)

The normal vector n = (a, b, c)^T of the light plane is consistent with the direction vector of the straight line O_0 O_1; thus, line O_0 O_1 can be expressed by

(x − x_1)/a = (y − y_1)/b = (z − z_1)/c. (6)

From (4) to (6), (7) can be derived as follows:

(x_0 − x_1)^2 + (y_0 − y_1)^2 + (z_0 − z_1)^2 = h^2 = R^2 − r^2. (7)

Combining (7) and (6), the coordinate of the sphere center O_0 can be obtained.
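The center recovery described above can be sketched as follows; this hypothetical snippet offsets the fitted circle center along the plane normal by √(R² − r²) and returns both candidate centers, since the sign ambiguity of (6)-(7) must be resolved from the geometry (e.g., the sphere center lies on the known side of the light plane away from the camera):

```python
import numpy as np

def sphere_center_candidates(circle_center, circle_radius, plane_normal, sphere_radius):
    """From the fitted light-stripe circle (center O1, radius r) and the light-plane
    normal n, recover the two candidate sphere centers O0 = O1 +/- h * n/|n|,
    where h = sqrt(R^2 - r^2) is the offset of the cutting plane from the center."""
    O1 = np.asarray(circle_center, float)
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    h = np.sqrt(sphere_radius**2 - circle_radius**2)
    return O1 + h * n, O1 - h * n

# Illustrative numbers: an R = 12.7 mm sphere cut 5 mm off-center by the plane z = 500
r = np.sqrt(12.7**2 - 5.0**2)          # radius of the resulting intersection circle
c1, c2 = sphere_center_candidates([0.0, 0.0, 500.0], r, [0.0, 0.0, 1.0], 12.7)
# -> candidates (0, 0, 505) and (0, 0, 495); the physical one is picked from geometry
```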

#### 4. Error Analysis of Reference Sphere Positioning Measurement

In Figure 3, the angle θ between the light plane and the surface normal at the measured point is defined as the projected angle.

As shown in Figure 4, the light plane intersects with the reference sphere. Assume that point Q is an arbitrary point on the circular cross-section, O_0 is the center of the reference sphere, and O_1 is the center of the intersecting circle. The surface normal vector of the spherical surface at point Q is consistent with the direction vector of line O_0 Q, and the direction vector of line O_0 O_1 is perpendicular to the light plane. According to the above definition, θ is the projected angle at point Q:

θ = arccos(r/R). (8)

According to (8), the projected angle θ varies with the radius r of the intersecting circle, which represents the different intersecting sections of the light stripe and the spherical surface; all points on one intersecting section have the same projected angle θ. The curve describing the relationship between the projected angle θ and the radius r of the circular cross-section is shown in Figure 5.

As shown in Figure 5, when the light plane is tangent to the reference sphere, namely, r = 0, the projected angle θ = 90°. The projected angle reduces gradually as the light plane approaches the maximum circular cross-section; when the light plane intersects the reference sphere at the maximum circular cross-section, namely, r = R, θ = 0°. The curve varies approximately linearly over a wide range, but when the light plane is near the maximum circular cross-section, the projected angle drops quickly.
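Assuming the projected angle follows θ = arccos(r/R) for an intersection circle of radius r on a sphere of radius R (consistent with θ = 90° at a tangent cut and θ = 0° at the maximum cross-section), a minimal sketch:

```python
import math

def projected_angle_deg(r, R):
    """Projected angle between the light plane and the sphere surface normal
    for an intersection circle of radius r on a sphere of radius R."""
    return math.degrees(math.acos(r / R))

R = 12.7  # sphere radius in mm (matching the 25.4 mm ball used later)
# tangent cut (r = 0): 90 deg; half radius: 60 deg; maximum cross-section (r = R): 0 deg
angles = [round(projected_angle_deg(r, R), 1) for r in (0.0, 6.35, 12.7)]
# -> [90.0, 60.0, 0.0]
```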

The actual light stripe produced by a semiconductor laser has a certain thickness, so the intersecting circular band on the reference sphere surface has a certain width. Because of the projected angle, there is a deviation between the geometric center of the light stripe in the measurement image and the geometric center of the stripe on the sphere surface, and the larger the projected angle is, the larger this deviation becomes [19, 20]. Thus, a salient deviation is produced when the light plane is close to the edge of the sphere, where the projected angle is large. The deviation reduces the accuracy of the fitted circle, the calculated center offset, and finally the reference sphere positioning measurement.

The errors of the offset distance h and the fitting circle radius r caused by the above deviation are defined as Δh and Δr, respectively. Taking the derivative of (5), the relationship between Δh and Δr is obtained:

Δh = (r/√(R^2 − r^2)) Δr. (9)

In Figure 6, the curve indicates that the ratio r/√(R^2 − r^2) increases gradually as the light plane moves from the edge of the sphere to the center, with r going from 0 to R. When the light plane intersects the reference sphere near its edge, r is very small but the projected angle is large; the resulting deviations Δr and Δh are larger, which brings about a considerable positioning error of the sphere center. When the light plane is close to the maximum circular cross-section, the projected angle, the deviation, and Δr are very small, but the amplification factor r/√(R^2 − r^2) is very large, and the positioning error of the sphere center is again intolerable. So there is an accurate measurement area onto which the line-structured light vision sensor should project the light stripe.
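A short numeric sketch of this sensitivity, assuming the propagation |Δh| = r/√(R² − r²) · |Δr| obtained by differentiating h = √(R² − r²); the 10 μm radius error and the three cut positions are illustrative:

```python
import math

def center_offset_error(r, R, delta_r):
    """Error propagated from the fitted circle radius r to the cross-section
    offset h = sqrt(R^2 - r^2): |dh| = r / sqrt(R^2 - r^2) * |dr|."""
    return r / math.sqrt(R**2 - r**2) * delta_r

R = 12.7  # sphere radius, mm
# the same 10 um radius error is amplified as the cut nears the maximum cross-section
err_edge = center_offset_error(2.0, R, 0.010)       # light plane near the sphere edge
err_mid = center_offset_error(9.0, R, 0.010)        # middle of the hemisphere
err_near_max = center_offset_error(12.6, R, 0.010)  # near the maximum cross-section
# err_edge < err_mid < err_near_max
```

This illustrates why the most accurate region lies between the two extremes: near the edge the stripe-center deviation itself grows, while near the maximum cross-section the amplification factor blows up.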

#### 5. Experiments

The experimental setup is shown in Figure 7. The line-structured light vision sensor is mounted on the end effector of the robot. The reference sphere is fixed on a one-dimensional electric displacement rail whose repetitive positioning accuracy is better than 3 *μ*m and whose grating ruler has a resolution of 1 *μ*m. The reference sphere is a ZrO_{2} ceramic ball (grade G10, GB308-2002/ISO3290-1998) coated with developer, with a diameter of Ø25.4 mm. The line-structured light vision sensor is composed of a TELI CS8620BCi camera made by Toshiba, a customized lens, and a linear semiconductor laser LH650-4-5 made by Huanic. The spatial resolution of the camera is 768 × 576 (pixel), and the pixel size is 8.6 × 8.3 (*μ*m). The customized lens contains a narrow-band pass filter whose central wavelength is 650 nm and whose bandpass half-width is 10 nm. The central wavelength of the semiconductor laser is 650 nm, and the line width is less than 1 mm at the working distance of the sensor. The parameters of the camera are calibrated precisely as follows:

The structure parameters of the line-structured light vision sensor are expressed as

First, adjust the robot pose to meet the following requirements: the reference sphere is in the sensor's field of view and at its working distance, and the image plane of the camera is roughly parallel with the plane of the guide rail, with its horizontal direction consistent with the moving direction of the rail. Then, stop the industrial robot to keep the sensor still. The reference sphere is moved to successive positions (at 1 mm intervals) along the guide rail and measured at each position. The distances calculated from the positioning measurement results at adjacent positions are compared with the feedback data of the grating ruler. In the experiment, the start position of the light plane is at the middle of the right hemisphere; the sphere moves through 33 positions, passing through the maximum circular cross-section and terminating at the edge of the left hemisphere.
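The comparison step can be sketched as follows; the sphere-center coordinates and ruler steps below are hypothetical, not the paper's measured data:

```python
import numpy as np

def adjacent_distance_errors(centers_mm, ruler_steps_mm):
    """Compare distances between sphere centers measured at adjacent rail positions
    with the displacement reported by the grating ruler for the same moves."""
    centers = np.asarray(centers_mm, float)
    measured = np.linalg.norm(np.diff(centers, axis=0), axis=1)  # adjacent distances
    return measured - np.asarray(ruler_steps_mm, float)

# Three hypothetical positions nominally 1 mm apart along x, at 500 mm standoff
centers = [[0.0, 0.0, 500.0], [1.002, 0.0, 500.0], [1.998, 0.0, 500.0]]
errors = adjacent_distance_errors(centers, [1.0, 1.0])
# -> approximately [0.002, -0.004]
```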

The fitting radius of the circular cross-section at different measuring positions and the distance error between the adjacent spheres are shown in Figure 8.

In Figure 8, the curve of the fitting circle's radius indicates that the light stripe moves from the middle of one hemisphere to the edge of the other. The distance error curve shows that the positioning measurement errors are larger at the center and at the edge of the reference sphere (at the edge of the left hemisphere in the experiment). This conclusion is consistent with the foregoing error analysis. Further statistical analysis shows that the measurement accuracy is higher around the middle of the hemisphere, between positions no. 18 and no. 32 in Figure 8. From the experiments and statistical analysis, the accurate measurement area is the corresponding band of distances away from the center of the reference sphere.

#### 6. Conclusions

Based on the measuring principle of the line-structured light vision sensor, a mathematical model of reference sphere positioning measurement has been established in this paper, and the positioning measurement error has been analyzed in detail. The experimental results show that the method is valid and correct. In addition, an accurate measurement area, defined by a range of distances from the center of the reference sphere, is delimited through statistical analysis of the experimental data. For robot temperature compensation and calibration of flexible vision measurement systems, this method effectively solves the reference sphere positioning measurement problem with the line-structured light vision sensor and has been applied successfully in industrial flexible online measurement systems.

#### Acknowledgments

This work was funded by the National Natural Science Foundation of China (61172120, 61372143) and the Natural Science Foundation of Tianjin, China (12JCQNJC02200, 13JCZDJC34800).

#### References

1. F. Zhou and G. Zhang, “Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations,” *Image and Vision Computing*, vol. 23, no. 1, pp. 59–67, 2005.
2. J. B. Park, S. H. Lee, and J. Lee, “Precise 3D lug pose detection sensor for automatic robot welding using a structured-light vision system,” *Sensors*, vol. 9, no. 9, pp. 7550–7565, 2009.
3. B. Wu, T. Xue, T. Zhang, and S. Ye, “A novel method for round steel measurement with a multi-line structured light vision sensor,” *Measurement Science and Technology*, vol. 21, no. 2, Article ID 025204, 2010.
4. F. J. Brosed, J. J. Aguilar, D. Guillomïa, and J. Santolaria, “3D geometrical inspection of complex geometry parts using a novel laser triangulation sensor and a robot,” *Sensors*, vol. 11, no. 1, pp. 90–110, 2011.
5. Y. Bian, T. Guo, and G. X. Zhang, “Measuring method of the workpieces' shoulder characteristic size based on structured light,” in *7th International Symposium on Instrumentation and Control Technology*, vol. 7129 of *Proceedings of SPIE*, October 2008.
6. F. Zhou, G. Zhang, and J. Jiang, “High accurate non-contact method for measuring geometric parameters of spatial circle,” *Chinese Journal of Scientific Instrument*, vol. 25, no. 5, pp. 604–607, 2004.
7. B. Wu, T. Xue, and S. Ye, “A two-step method for spatial circle orientation with a structured light vision sensor and error analysis,” *Measurement Science and Technology*, vol. 21, no. 7, Article ID 075105, 2010.
8. S. Y. Liu, G. R. Wang, and J. G. Zhong, “Application and prospect of vision sensing system in robot welding,” *Mechanical Science and Technology*, vol. 24, no. 11, pp. 1276–1300, 2005.
9. R. Franke, T. Bertram, M. Schulte, and C. Von Kopylow, “Development of a high accuracy automatic measurement system utilizing an industrial robot and a fringe projection system,” in *Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications (TePRA '09)*, pp. 141–148, November 2009.
10. S. Eastwood and P. Webb, “Compensation of thermal deformation of a hybrid parallel kinematic machine,” *Robotics and Computer-Integrated Manufacturing*, vol. 25, no. 1, pp. 81–90, 2009.
11. L. Guo, Y. J. Liang, J. C. Song, Z. Y. Sun, and J. G. Zhu, “Compensation for positioning error of industrial robot for flexible vision measuring system,” in *8th International Symposium on Precision Engineering Measurement and Instrumentation*, vol. 8759 of *Proceedings of SPIE*, 2013.
12. F. Zhou, Y. Cui, B. Peng, and Y. Wang, “A novel optimization method of camera parameters used for vision measurement,” *Optics and Laser Technology*, vol. 44, no. 6, pp. 1840–1849, 2012.
13. F. Q. Zhou, Y. Cui, G. He, and Y. X. Wang, “Line-based camera calibration with lens distortion correction from a single image,” *Optics and Lasers in Engineering*, vol. 51, no. 12, pp. 1332–1343, 2013.
14. F. Q. Zhou, Y. Cui, Y. X. Wang, L. Liu, and H. Gao, “Accurate and robust estimation of camera parameters using RANSAC,” *Optics and Lasers in Engineering*, vol. 51, no. 3, pp. 197–212, 2013.
15. B. Zhang, Y. F. Li, and Y. H. Wu, “Self-recalibration of a structured light system via plane-based homography,” *Pattern Recognition*, vol. 40, no. 4, pp. 1368–1377, 2007.
16. T. Xue, L. Q. Qu, Z. F. Cao, and T. Zhang, “Three-dimensional feature parameters measurement of bubbles in gas-liquid two-phase flow based on the virtual stereo vision,” *Flow Measurement and Instrumentation*, vol. 27, pp. 29–36, 2012.
17. N. Geng, J. Zhu, D. Lao, and S. Ye, “Theory and algorithm of coordinate system registration based on rigid body kinematics,” *Chinese Journal of Sensors and Actuators*, vol. 23, no. 8, pp. 1088–1092, 2010.
18. Z. Xie, Q. Zhang, and G. Zhang, “Modeling and calibration of a structured-light-sensor-based five-axis scanning system,” *Measurement*, vol. 36, no. 2, pp. 185–194, 2004.
19. H. Y. Feng, Y. Liu, and F. Xi, “Analysis of digitizing errors of a laser scanning system,” *Precision Engineering*, vol. 25, no. 3, pp. 185–191, 2001.
20. Z. Xie, C. Zhang, and G. Zhang, “Error compensation for structured light sensors,” *Chinese Journal of Scientific Instrument*, vol. 26, no. 7, pp. 667–725, 2005.