Journal of Sensors
Volume 2017 (2017), Article ID 8765450, 12 pages
https://doi.org/10.1155/2017/8765450
Research Article

Global Measurement Method for Large-Scale Components Based on a Multiple Field of View Combination

Key Laboratory for Precision and Non-Traditional Machining Technology of the Ministry of Education, Dalian University of Technology, Dalian 116024, China

Correspondence should be addressed to Wei Liu; lw2007@dlut.edu.cn

Received 1 March 2017; Revised 7 July 2017; Accepted 24 July 2017; Published 17 September 2017

Academic Editor: Vera Tyrsa

Copyright © 2017 Yang Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Considering the limited measurement range of a machine vision method for the three-dimensional (3D) surface measurement of large-scale components, a noncontact and flexible global measurement method combining a multiple field of view (FOV) is proposed in this paper. The measurement system consists of two theodolites and a binocular vision system with a transfer mark. The process of multiple FOV combinations is described, and a new global calibration method is proposed to solve the coordinate system unification issue of different instruments in the measurement system. In addition, a high-precision image acquisition method, which is based on laser stripe scanning and centre line extraction, is discussed to guarantee the measurement efficiency. With the measured 3D data, surface reconstruction of large-scale components is accomplished by data integration. Experiments are also conducted to verify the precision and effectiveness of the global measurement method.

1. Introduction

In modern manufacturing and assembly, the dimensional accuracy of large-scale components is required to guarantee the automatic assembly of high-end equipment. To ensure that components are manufactured as designed, their dimensions have to be accurately measured [1, 2]. However, the measurement range for large components is too large to rapidly complete onsite measurements with a single instrument. For large high-precision parts used in aerospace and aviation, mark points are not allowed to be pasted on the surfaces of the components; that is, measuring methods that only work with marks pasted on the surfaces are not suitable. Large-scale components, such as airplane wings and large antenna radomes, are usually clamped onto a bracket to prevent deformation prior to assembly. As a result, the surfaces of these components are partly blocked from view, and the area of interest is invisible to the measurement instrument from a specific viewing angle. Thus, studies of noncontact and flexible global measurement methods for the 3D surface measurement of large-scale components are important for high-quality assembly in aerospace and aviation.

Machine vision measurement has such advantages as noncontact operation, high efficiency, and high accuracy, so it has been extensively applied in industry [3–5]. Recently, researchers have proposed large-scale component measurement methods based on machine vision [6–8]. Three main machine vision methods are used in the measurement of large aviation components: photogrammetry [9–11], the structured light method, and binocular vision [12–15]. Photogrammetry captures a large number of reflective marks pasted on the surface of a part with a single camera and reconstructs the marks to represent the surface of the part. A system called V-STARS adopted this method to measure the lay-up tool of the Boeing B320 aircraft [16]. The system consists of a high-resolution camera and reflective marks that must be affixed to the surface of the measured part; the mark points can then be reconstructed from the images captured by the camera. The system has the advantage of good portability. However, pasting the mark points is time-consuming, and it is difficult to measure the entire surface using mark points, especially the boundary of the surface. The structured light method reconstructs the projected light by building a triangulation between a monocular camera and an active projected light source (laser/grating). During the measurement, the camera and the projected light source must remain relatively still, so only a few FOVs are available if the measurement accuracy is to be guaranteed. To measure parts over a large FOV, the measurement equipment must be driven by highly accurate movement mechanisms to scan the parts, as in the novel 3D scanning measurement technique for large components proposed by Jiang et al. [17]. However, in aviation assembly, where the onsite environment is complex, using large and highly accurate movement mechanisms to achieve onsite measurement of large components is impractical.
The German company GOM measures large components with structured light and photogrammetry; it has developed a measurement system called ATOS, based on profilometry with gratings, and another system called TRITOP, based on photogrammetry with reflective marks. NASA combined the ATOS system with the TRITOP system to measure the X-38 aircraft [18]; however, this approach requires many relocations of the equipment, and a large number of mark points must be affixed to the surfaces of the parts before measurement. Binocular vision uses two cameras to capture features projected onto the surface to be measured and reconstructs the surface; a light-probe-based large-FOV 3D vision measurement system proposed by Feng and Wei, which is based on binocular vision, can measure the curved surface of a large-scale object [19]. However, keeping the aided target steady during the measuring process is difficult, and only the coordinates of points on the curved surface can be measured with this method; the measuring process is complex, and the measurement accuracy is unstable. Therefore, a novel global measurement method for the 3D surfaces of large-scale components, based on a multiple field of view (FOV) combination, is proposed in this paper.

First, the principle and measurement process of the proposed method are introduced. Second, the calibration method of the measurement system is presented. Third, the algorithms of laser stripe extraction, image matching, and reconstruction are described. Last, the flexibility and precision of the measurement method are verified by experiments, as discussed in Section 5.

2. Measurement Principle

The measurement system consists of two theodolites and a binocular vision system with a transfer mark that is designed for the transition of views. In the measuring process, the two theodolites placed in the back are employed as the global control station for field combination and data integration. The binocular vision system with the transfer mark is placed in the front to capture images of the component surface. The global measurement coordinate system is established on the left theodolite to ensure that the binocular vision system can be placed at any position within the measurement range of the theodolites. By transferring the surface information captured by the cameras at different positions into the global coordinate system, high-precision data integration can be achieved. The measurement principle of the system is shown in Figure 1.

Figure 1: Schematic of the measurement system.

As the measurement system consists of different types of instruments, the measurement coordinate systems established on these instruments are different as well. To reconstruct an entire surface, the coordinates measured by these instruments must be unified in the global coordinate system. The Craig expression method of coordinate transformation is adopted in this paper to express the transformation relations [20]: the coordinate vector of any point P in a coordinate system {A} is expressed as the column vector of the coordinates of P in {A}. Two theodolites are employed as the global measurement control station of the measurement system, and the global coordinate system is established on the left theodolite.

The binocular vision system consists of two industrial complementary metal oxide semiconductor (CMOS) cameras. The left camera coordinate system is established on the left camera, with its origin at the optical centre of the camera and its coordinate axes in the same directions as the axes of the image sensor. Similarly, the right camera coordinate system is established on the right camera.

In the process of measurement, the large component to be measured must be divided into multiple fields of view and measured several times; thus, the cameras must be moved to different positions. To connect the mobile camera coordinate systems with the fixed global coordinate system, a transfer mark is attached to the binocular vision system. The relative position between the mark and the cameras remains unchanged during the measurement. By measuring the feature points on the mark, the two theodolites in the back can obtain the position and orientation of the binocular vision system. A coordinate system is established on the transfer mark, and the transformation matrix from the transfer mark coordinate system to the global coordinate system links the mobile cameras to the global frame.

The measurement process of the proposed system is as follows. First, the system is calibrated, which includes the calibration of the two theodolites, the binocular cameras, and their transformation relation, which is calibrated using the transfer mark. Second, the surface of the large-scale component is artificially divided into several parts that are separately measured by the binocular cameras at different positions; based on the binocular vision method with an assisted laser, the 3D point cloud data of each measured part are obtained [21]. Third, the feature points on the transfer mark are measured by the two theodolites for coordinate system transformation. Last, the data of every measured part obtained by the cameras at different positions are integrated into the global coordinate system. By fusing the obtained data, the overall dimensions of the component are measured and the entire surface of the large component can be reconstructed.

3. Calibration of the Measurement System

The calibration of the measurement system is the foundation of the 3D measurements. The proposed measurement system consists of different instruments. Thus, the high-precision calibration of all instruments is essential for ensuring the accuracy of large field measurements.

To measure the large surface of a component, the binocular vision system must be moved several times to acquire regional characteristic information. Due to the changed measuring position, the spatial location of the camera coordinate system varies in the meantime. Thus, an accurate transformation relationship between the camera coordinate system and the global coordinate system must be established.

In this paper, the global coordinate system is established on the left theodolite. The two-theodolite system is calibrated according to the two-theodolite spatial three-dimensional (3D) measurement model. The binocular cameras are calibrated using Zhang’s calibration method [22]. By measuring the coordinates of the feature points on the transfer mark with the theodolites, the relation among the theodolites, the binocular cameras, and the transfer mark can be obtained. In this manner, global calibration is accomplished.

3.1. Calibration of Two Theodolites

The global coordinate system is established on the left theodolite according to the perspective projection model [23, 24]. As shown in Figure 2, the 3D coordinate system of the left theodolite is constructed with its origin at the observation centre. The two-dimensional (2D) image coordinate system of the left theodolite is established on a virtual image plane. The relation between the 2D image coordinates (u_l, v_l) and the 3D coordinates (x_l, y_l, z_l) can be calculated based on the perspective projection model as s [u_l, v_l, 1]^T = [x_l, y_l, z_l]^T, where s is the scale factor and the image coordinates are determined by the horizontal angle and the vertical angle obtained by the theodolite. To ensure the measurement accuracy, neither the horizontal angle nor the vertical angle should be close to 0 degrees or 90 degrees. (u_l, v_l) and (x_l, y_l, z_l) are the coordinates in the 2D image coordinate system and the 3D coordinate system of the left theodolite, respectively.
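As a minimal sketch of this projection model, the sight line and its unit-focal-plane image coordinates might be computed as below. The angle convention is an assumption (horizontal angle measured in the x-z plane, vertical angle as elevation above it), not a detail stated in the paper:

```python
import numpy as np

def sight_direction(alpha, beta):
    """Unit direction of a theodolite sight line, assuming the horizontal
    angle alpha is measured in the x-z plane and the vertical angle beta
    is the elevation above that plane."""
    d = np.array([np.cos(beta) * np.sin(alpha),
                  np.sin(beta),
                  np.cos(beta) * np.cos(alpha)])
    return d / np.linalg.norm(d)

def image_coords(alpha, beta):
    """'Image' coordinates of the sight line on a unit-focal plane z = 1:
    u = x/z = tan(alpha), v = y/z = tan(beta)/cos(alpha)."""
    return np.tan(alpha), np.tan(beta) / np.cos(alpha)
```

Under this convention the angle pair and the projective image coordinates carry the same information, which is what makes the perspective model of the theodolite usable alongside the cameras.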

Figure 2: Coordinate systems for the two theodolites.

Similarly, the 3D coordinate system and the 2D image coordinate system of the right theodolite are established, and the relationship between these two systems is calculated.

The coordinate vector of point P in the left theodolite coordinate system is expressed as P_l, and P_r is the coordinate vector of point P in the right theodolite coordinate system. The relationship between P_l and P_r is expressed as P_l = R P_r + T, where R is the coordinate rotation matrix and T is the coordinate translation matrix.

According to (2) and (3), the 3D coordinates of a point can be expressed in terms of the angle observations of both theodolites.

By measuring a reference target of known length three times with the two theodolites, the rotation matrix R and the translation matrix T are obtained.

In the global coordinate system, the coordinate vector of any point in the FOV is obtained from the intersection of the two sight lines, expressed as a function of the horizontal angle and the vertical angle obtained by the left theodolite and the horizontal angle obtained by the right theodolite.
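The sight-line intersection can be sketched as a least-squares intersection of two rays (the midpoint of their common perpendicular, which also handles slightly skew lines caused by angle noise). The origins and directions in the test are stand-ins, not calibrated values:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Point closest to two sight lines o_i + t_i * d_i: solve for the
    ray parameters in the least-squares sense, then take the midpoint
    of the two closest points."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # [d1 -d2] [t1, t2]^T ≈ o2 - o1
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1 = o1 + t[0] * d1
    p2 = o2 + t[1] * d2
    return 0.5 * (p1 + p2)
```

Here the left theodolite sits at the global origin and the right theodolite's origin and direction would come from the calibrated R and T.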

3.2. Calibration of Binocular Cameras

Two industrial cameras with high resolution are employed in the 3D surface measurement system. To measure the surface of the large-scale component that is blocked, the binocular cameras must be moved to different positions.

The pixel coordinate vector of a point in the image coordinate system is p, and its coordinate vector in the world coordinate system is P_w. After the calibration of the cameras, the relation between p and P_w can be described by s p = A [R | T] P_w, where A is the intrinsic matrix of the camera and [R | T] is the extrinsic matrix, with R the coordinate rotation matrix and T the coordinate translation matrix. The coordinate system of the left camera is considered as the reference coordinate system of the binocular vision system, so the transformation matrix, the rotation matrix, and the translation matrix from the coordinate system of the right camera to that of the left camera are calibrated accordingly.
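The pinhole relation above can be illustrated with a short sketch; the intrinsic values below are hypothetical, not the calibrated parameters of the cameras used in the paper:

```python
import numpy as np

def project(K, R, t, Pw):
    """Pinhole projection: s * [u, v, 1]^T = K (R @ Pw + t); returns (u, v)."""
    uvw = K @ (R @ Pw + t)
    return uvw[:2] / uvw[2]

# Hypothetical intrinsic matrix (focal lengths and principal point in pixels).
K = np.array([[3500.0, 0.0, 1024.0],
              [0.0, 3500.0, 768.0],
              [0.0, 0.0, 1.0]])
```

With R = I and t = 0 the camera frame coincides with the world frame, and a point on the optical axis projects exactly onto the principal point.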

3.3. Calibration of the Relationship between the Binocular Cameras and the Transfer Mark

In the process of measurement, the transfer mark is essential for the transformation from the camera coordinate system to the global coordinate system.

As the transfer mark does not face the binocular cameras, the feature points on the mark cannot be captured by the cameras, and the transformation relation between the camera coordinate system and the transfer mark coordinate system cannot be directly established. As a result, the calibration process must be divided into the following two steps.

(1) Transformation from the Transfer Mark Coordinate System to the Global Coordinate System. To guarantee the transformation precision, a high-precision checkerboard is employed as the transfer mark. The checkerboard is mounted on the bracket facing away from the binocular cameras, toward the theodolites, as shown in Figure 3.

Figure 3: Binocular-camera measurement system with the transfer mark.

The coordinate system of the transfer mark is established with its origin at the centre point of the checkerboard. The directions of the x axis and the y axis are defined as shown in Figure 3, and the direction of the z axis is defined by the right-hand rule.

By measuring the feature points on the transfer mark with the two theodolites, the transformation matrix between the transfer mark coordinate system and the global coordinate system is acquired. The transformation relation can be expressed as P_g = R_mg P_m + T_mg, where R_mg is the rotation matrix from the transfer mark coordinate system to the global coordinate system and T_mg is the translation matrix.

(2) Transformation from the Binocular-Camera Coordinate System to the Global Coordinate System. The binocular-camera coordinate system is constructed on the left camera. The x axis of the image plane is horizontal, and the y axis is vertical, as shown in Figure 3. First, the feature points on the transfer mark are measured by the two theodolites and reconstructed in the global coordinate system. Second, the same points are measured by the binocular cameras and reconstructed in the binocular-camera coordinate system. With the two groups of coordinates, the transformation relation between the two coordinate systems can be written as P_g = R_cg P_c + T_cg, where R_cg is the rotation matrix from the binocular-camera coordinate system to the global coordinate system and T_cg is the translation matrix.

(3) Transformation from the Binocular-Camera Coordinate System to the Transfer Mark Coordinate System. As R_mg, T_mg, R_cg, and T_cg were determined in the preceding steps, the transformation from the binocular-camera coordinate system to the transfer mark coordinate system can be obtained based on the invariance of the spatial vector. The transformation relation can be expressed as R_cm = R_mg^T R_cg and T_cm = R_mg^T (T_cg - T_mg).
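The two-step calibration and the later per-station update can be sketched with 4x4 homogeneous transforms; the numeric poses below are made up for illustration:

```python
import numpy as np

def rt_to_T(R, t):
    """Pack a rotation matrix and a translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def inv_T(T):
    """Closed-form inverse of a rigid transform."""
    R, t = T[:3, :3], T[:3, 3]
    return rt_to_T(R.T, -R.T @ t)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Calibration stage: both poses observed once (illustrative values).
T_mark_to_global = rt_to_T(rot_z(0.4), np.array([2.0, 0.5, 0.0]))
T_cam_to_global = rt_to_T(rot_z(0.1), np.array([1.8, 0.2, 0.3]))

# Fixed camera-to-mark transform: invariant because the mark is rigidly
# attached to the camera rig.
T_cam_to_mark = inv_T(T_mark_to_global) @ T_cam_to_global

# Measurement stage at a new station: the theodolites give a new mark
# pose, and the camera pose follows without re-observing the cameras.
T_mark_to_global_new = rt_to_T(rot_z(-0.7), np.array([0.3, 1.1, 0.0]))
T_cam_to_global_new = T_mark_to_global_new @ T_cam_to_mark
```

This is exactly why the transfer mark works: only T_mark_to_global changes when the rig moves, while T_cam_to_mark stays fixed after the one-time calibration.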

After the calibration, the relative position between the binocular cameras and the transfer mark remains unchanged. In the process of measurement, data transformation from the binocular-camera coordinate system to the global coordinate system is achieved by measuring the transfer mark with the two theodolites at each camera position.

4. Acquisition and Reconstruction of a Large-Scale Three-Dimensional Surface of a Component

4.1. Local Image Processing and Reconstruction

In a single FOV, the feature information of the component is measured using a laser scanning method. The assisted laser stripes that are projected on the surface of the component are captured by a binocular vision measurement system.

Prior to reconstructing the laser stripes, the centres of the stripes are extracted using a grey centroid method [21]. The corresponding centre points of the left and right images are matched using the epipolar constraint of the binocular vision system. The matching equation is expressed as p_r^T F p_l = 0, where p_l is the homogeneous coordinate vector of a laser stripe centre point in the left image and p_r is that of the corresponding point in the right image. The fundamental matrix F can be calculated using two calibration target points with an exact distance [25].
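A minimal sketch of grey-centroid centre extraction is shown below: for each image row, the stripe centre is the intensity-weighted mean column of the pixels above a threshold. The stripe orientation (roughly vertical) and the threshold value are assumptions, not parameters given in the paper:

```python
import numpy as np

def stripe_centres(img, threshold=30):
    """Sub-pixel stripe centre per image row:
    centre = sum(col * I) / sum(I) over pixels with I >= threshold."""
    img = img.astype(float)
    cols = np.arange(img.shape[1])
    centres = np.full(img.shape[0], np.nan)  # NaN where no stripe is found
    for r in range(img.shape[0]):
        row = np.where(img[r] >= threshold, img[r], 0.0)
        s = row.sum()
        if s > 0:
            centres[r] = (cols * row).sum() / s
    return centres
```

The weighting gives sub-pixel centre positions, which is what makes the subsequent epipolar matching and triangulation accurate enough for surface measurement.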

After matching the feature points of the left and right images, the reconstruction of the points can be realized according to the binocular reconstruction principle. The coordinate vector of any matched point in the camera coordinate system is computed by triangulation from the coordinates of the point in the image coordinate system of the left camera and in the image coordinate system of the right camera, using the rotation matrix and the translation matrix from the right camera to the left camera obtained from the calibration of the binocular cameras, and the effective focal lengths of the left camera and the right camera, respectively.
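The binocular reconstruction step can be sketched with a standard linear (DLT) triangulation from two projection matrices. This is a generic stand-in for the paper's closed-form stereo formula, not a reproduction of it:

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Linear triangulation of one matched point pair from two 3x4
    projection matrices P1, P2 and pixel coordinates uv1, uv2."""
    # Each image gives two linear constraints on the homogeneous point X.
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # X is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noiseless projections this recovers the 3D point exactly; with noisy stripe centres it returns the least-squares solution.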

4.2. Reconstruction of a 3D Surface of a Large-Scale Component

The binocular vision system is utilized to capture the large-scale component from different positions. By measuring the spatial positions of the feature points on the transfer mark with the two theodolites, the camera-to-global transformation at each position can be calculated. With the intrinsic parameters calibrated in the laboratory and the extrinsic parameters calibrated onsite, the local coordinates are transferred to global coordinates according to (8) and (10), and the global reconstruction of the 3D surface of a large-scale component can be realized.

5. Measurement Experiments

The global measurement system for the 3D surface of a large-scale component was set up in a laboratory, as shown in Figure 4. The binocular vision system in the front consists of two CMOS cameras (Vieworks VC-12MC-M/C 65 with 35 mm lenses; the theoretical resolution limit is about 0.5 mm) and a transfer mark (a checkerboard with an interval of 30 mm and an accuracy of 0.5 μm). Two theodolites (Kolida, accuracy of 1′′) are set in the back. According to the presented global calibration method, the theodolites and the relationship between the mark and the binocular cameras are precisely calibrated in the laboratory. The reconstruction experiment is conducted with this measurement system.

Figure 4: Global measurement system for 3D surface measurement.
5.1. Calibration of the Global Measurement System and Accuracy Evaluation

According to the calibration method proposed in Section 3, the calibration experiments of the measurement system are conducted in the laboratory.

(1) Calibration of the System Parameters. The transformation matrix between the two theodolites is calibrated. Based on the coordinates of six spatial feature points and the orthogonal constraint condition of the rotation matrices, the rotation matrix and the translation matrix of the two-theodolite system are obtained, and the global coordinate system is constructed on the left theodolite.

The calibration results of the binocular cameras are as follows.

The intrinsic parameter matrices of the left camera and the right camera are obtained.

The extrinsic parameters of the binocular cameras are also obtained.

According to the calibration method discussed in Section 3.2, the transformation from the right camera to the left camera is obtained.

(2) Calibration of Global Parameters. Based on the calibration principle of the two theodolites, the global measurement system is calibrated on one spot. The feature points on the transfer mark are measured using the two theodolites, and the transformation matrix from the transfer mark coordinate system to the global coordinate system is thereby obtained.

With the calibrated transformation matrices, any point in the FOV can be reconstructed in the global coordinate system.

To evaluate the measurement accuracy, a long one-dimensional (1D) target with two characteristic points (the distance between the two points is 1225.0214 mm) is used. The measurement FOV is divided into two parts. The evaluation process is as follows: first, the target is placed in front of the cameras, as shown in Figure 5. Second, the cameras measure the left characteristic point of the target and are then moved to the second position to measure the right characteristic point. Last, the coordinates of the two characteristic points measured by the binocular cameras at the two positions are integrated into the global coordinate system. To assess repeatability, the 1D target is placed at three different positions and the process is repeated three times. The targets reconstructed in the global coordinate system are shown in Figure 6. To evaluate the accuracy of the measurement system, the deviation between the real length and the measured length of the 1D target is calculated and listed in Table 1. The results indicate a maximum global measurement deviation of 0.103% for the proposed method.
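The relative deviation figures in Table 1 follow from a simple ratio; the measured value below is hypothetical, chosen only to illustrate the order of magnitude of the reported 0.103%:

```python
# Reference length of the 1D target (from the paper) and a hypothetical
# measured length after global integration.
reference_mm = 1225.0214
measured_mm = 1226.28  # illustrative value, not a reported measurement

# Relative deviation as a percentage of the reference length.
deviation_pct = abs(measured_mm - reference_mm) / reference_mm * 100.0
```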

Table 1: Evaluation of the measurement accuracy (mm).
Figure 5: Target measurement experiment. (a) 1D long target with two feature points. (b) First measurement position of the cameras.
Figure 6: Reconstruction of the target at three different positions.
5.2. 3D Measurement of the Global Measurement System

To verify the feasibility of the proposed method, a standard flat part with a size of 600 mm × 800 mm is measured in the laboratory. Before the experiments, the standard flat part was measured using a coordinate measuring machine (Zeiss Prismo Navigator) to obtain the reference data. By comparing the reconstruction results with the actual values, the reconstruction accuracy of the global measurement system is validated. The experimental system is shown in Figure 7. The calibration results of the binocular cameras and the global system are listed in Tables 2 and 3, respectively.

Table 2: The calibration results of binocular cameras.
Table 3: The calibration results of global system.
Figure 7: Reconstruction of the target at different positions.

In the experiment, the measurement FOV is divided into four parts based on the size of the plate. The four parts of the board are measured by the cameras at different positions, and images of the laser stripes projected on the board are captured. The centres of the laser stripes are extracted and matched using the image processing method described in Section 4.1. By monitoring the transfer mark with the two theodolites, the local 3D data from the measurements are unified into the global coordinate system. The results of feature extraction and the transformation matrices for the different positions are shown in Table 4, and the reconstructed results are shown in Figure 8. The 3D reconstruction points of the measured part are compared with the ideal standard plane. The experimental results reveal a reconstruction deviation of 0.14% over the corresponding measurement field.

Table 4: The results of feature extraction and transformation in different positions.
Figure 8: Reconstruction results of the large plate in the global coordinate system.

To verify the validity of the system, a curved composite part is measured. The part is shown in Figure 9(a) and the reconstruction result is shown in Figure 9(b). The results indicate that the proposed system can effectively measure curved components.

Figure 9: Experimental results of a curved composite part in the global coordinate system. (a) The measured curved composite part. (b) Reconstruction results.

6. Conclusions

In this paper, a global measurement method based on a multiple FOV combination was proposed for measuring the 3D surface of a large-scale component. Compared with existing methods, the proposed method has the advantages of high efficiency and no requirement for pasting target marks on the component. As the cameras with the transfer mark can be placed at any position within the measurement range of the theodolites, the measurement system is more flexible for large-scale component measurement. The experimental results in the laboratory indicate that a maximum deviation of 0.103% can be attained when the length of the 1D target is approximately 1.225 m; for the large board measurement experiment, the reconstruction deviation is less than 0.14%. Thus, the proposed method is practicable and suitable for measuring the 3D surface of a large-scale component at an industrial site, and the measurement system can be employed in the assembly process of large-scale industrial components. Additional research is suggested to improve the global measurement accuracy by optimizing the calibration process and improving the calibration precision.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Basic Research Program of China 973 Project (Grant no. 2014CB046504), the National Science Foundation for Outstanding Young Scholars of China (no. 51622501), the National Natural Science Foundation of China (Grant no. 51375075), and the Science Fund for Creative Research Groups (Grant no. 51621064).

References

1. H. Kieu, T. Pan, Z. Wang, M. Le, H. Nguyen, and M. Vo, “Accurate 3D shape measurement of multiple separate objects with stereo vision,” Measurement Science and Technology, vol. 25, no. 3, Article ID 035401, 2014.
2. Z. Liu, J. Zhu, L. Yang, H. Liu, J. Wu, and B. Xue, “A single-station multi-tasking 3D coordinate measurement method for large-scale metrology based on rotary-laser scanning,” Measurement Science and Technology, vol. 24, no. 10, Article ID 105004, 2013.
3. S. Aghaie, S. Khanmohammadi, H. Moghadam-Fard, and F. Samadi, “Adaptive vision-based control of robot manipulators using the interpolating polynomial,” Transactions of the Institute of Measurement and Control, vol. 36, no. 6, pp. 837–844, 2014.
4. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Optical Engineering, vol. 39, no. 1, pp. 10–22, 2000.
5. D. Ponsa, J. Serrat, and A. M. López, “On-board image-based vehicle detection and tracking,” Transactions of the Institute of Measurement and Control, vol. 33, no. 7, pp. 783–805, 2011.
6. E. Shi, J. Guo, H. Zhou, and W. Shao, “Study on on-line measurement technology for large-scale sheet parts with free-form surface,” Chinese Journal of Scientific Instrument, vol. 9, article 004, 2009.
7. Y. Ling and Y. Jiahe, “The 3D surface measurement and simulation for turbine blade surface based on color encoding structural light,” International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 8, no. 3, pp. 273–280, 2015.
8. J. Shi, Z. Sun, and S. Bai, “Large-scale three-dimensional measurement via combining 3D scanner and laser rangefinder,” Applied Optics, vol. 54, no. 10, pp. 2814–2823, 2015.
9. C. Mei, Z. Fei, and Y. Zhang, “Virtual surface measurement of large deployable space antenna structure,” Advanced Materials Research, vol. 479-481, pp. 2586–2592, 2012.
10. C. Schwartz, R. Sarlette, M. Weinmann, M. Rump, and R. Klein, “Design and implementation of practical bidirectional texture function measurement devices focusing on the developments at the University of Bonn,” Sensors, vol. 14, no. 5, pp. 7753–7819, 2014.
11. Z. Jia, X. Ma, W. Liu et al., “Pose measurement method and experiments for high-speed rolling targets in a wind tunnel,” Sensors, vol. 14, no. 12, pp. 23933–23953, 2014.
12. K. Iwata, Y. Sando, K. Satoh, and K. Moriwaki, “Application of generalized grating imaging to pattern projection in three-dimensional profilometry,” Applied Optics, vol. 50, no. 26, pp. 5115–5121, 2011.
13. F. J. Brosed, J. J. Aguilar, D. Guillomïa, and J. Santolaria, “3D geometrical inspection of complex geometry parts using a novel laser triangulation sensor and a robot,” Sensors, vol. 11, no. 1, pp. 90–110, 2011.
14. Q. K. Dang, Y. Chee, D. D. Pham, and Y. S. Suh, “A virtual blind cane using a line laser-based vision system and an inertial measurement unit,” Sensors, vol. 16, no. 1, article 95, 2016.
15. Z.-F. Zhang, Z. Gao, Y.-Y. Liu et al., “Computer vision based method and system for online measurement of geometric parameters of train wheel sets,” Sensors, vol. 12, no. 1, pp. 334–346, 2012.
16. Aerospace Applications, http://www.geodetic.com/applications/aerospace.aspx.
17. H. Jiang, H. Zhao, and X. Li, “High dynamic range fringe acquisition: a novel 3-D scanning technique for high-reflective surfaces,” Optics and Lasers in Engineering, vol. 50, no. 10, pp. 1484–1493, 2012.
18. Fuselage and Cabin, http://www.gom.com/industries/aerospace/fuselage-cabin.html.
19. P. Feng and Z.-Z. Wei, “Light probe based large FOV 3D vision measurement system,” Guangxue Jingmi Gongcheng/Optics and Precision Engineering, vol. 21, no. 9, pp. 2217–2224, 2013.
20. J. J. Craig, Introduction to Robotics: Mechanics and Control, Series in Electrical and Computer Engineering: Control Engineering, Addison-Wesley, Reading, Mass, USA, 1989.
21. W. Liu, Y. Zhang, F. Yang et al., “A measurement method for large parts combining with feature compression extraction and directed edge-point criterion,” Sensors, vol. 17, no. 1, article 40, 2017.
22. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000.
23. F. Zhou, G. Zhang, J. Jiang, B. Wu, and S. Ye, “Three-dimensional coordinate measuring system with binotheodolites on site,” Chinese Journal of Mechanical Engineering, vol. 1, article 032, 2004.
24. Z. Zhang, J. Zhu, N. Geng, H. Zhou, and S. Ye, “The design of double-theodolite 3D coordinate measurement system,” Chinese Journal of Sensors and Actuators, vol. 5, article 014, 2010.
25. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2nd edition, 2003.