Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement
High-speed CMOS cameras are a new kind of transducer for videogrammetric measurement of the displacement of high-speed shaking table structures. The purpose of this paper is to validate the accuracy of the three-dimensional coordinates of a shaking table structure acquired from the presented high-speed videogrammetric measuring system. All of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which confirms that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of shaking table structures.
In the field of civil engineering, the dynamic response of many structures needs to be tested in a simulated environment to verify their seismic performance, reduce disaster losses, and maintain social stability before their application in practice. One of the most frequently used experimental platforms is the earthquake shaking table, a device for shaking structural models or building components with a simulated seismic wave. Traditionally, contact transducers were attached to the structural models to acquire the response of the shaking table structure; these have the common limitation of providing only one-dimensional information at a single point. Additionally, some newer techniques, such as the Global Positioning System (GPS), Laser Doppler Vibrometers (LDV), and Interferometric Synthetic Aperture Radar (InSAR) sensors [2–5], have also been studied to capture the dynamic responses of shaking table structures. However, all of these techniques are impractical for monitoring a high-speed shaking table structure.
Another potential technique for target tracking is videogrammetry, which has been widely used in the medical, automotive, and astronautical fields since the 1970s. Videogrammetry is a direct extension of photogrammetry: it calculates the three-dimensional coordinates of an object as a function of time from simultaneously triggered image sequences and can further perform three-dimensional shape reconstruction and analyze the dynamic response of a shaking table structure. For shaking table experiments, it is difficult to obtain a detailed dynamic response of a high-speed moving object using common cameras with frame rates below about 20 fps. In recent years, with the rapid development of sensor technology, especially Complementary Metal-Oxide-Semiconductor (CMOS) integrated-circuit technology, the frame rate of some high-speed CMOS cameras can reach 1000 fps at a maximum resolution of 1280 (H) by 1024 (V) pixels. In theory, equipped with high-speed CMOS cameras, the videogrammetric measurement technique can satisfy the requirements of monitoring the dynamic response of a shaking table structure. However, many factors, such as the accuracy of the control-point measurements, the accuracy of camera synchronization, and the accuracy of elliptical target detection, determine the accuracy of the final three-dimensional (3D) spatial coordinates of the tracked targets. Therefore, the objective of this paper is to study each key step of high-speed videogrammetric measurement for a shaking table structure in order to validate the accuracy of the 3D spatial coordinates of the targets attached to the surface of the structure.
The rest of the paper is organized as follows. Section 2 introduces the configuration of the high-speed videogrammetric measurement system. Section 3 analyzes the key factors affecting the accuracy of videogrammetric measurement, including the accuracy of camera calibration, the accuracy of camera synchronization, the layout of the control network, the accuracy of elliptical target detection, and a comparison of the accuracy of the 3D spatial coordinates of targets calculated from two and three high-speed CMOS cameras, respectively. Finally, a conclusion is presented in Section 4.
2. Videogrammetric Measurement System
For high-speed videogrammetric measurement, the measurement system plays the most important role, as it is the foundation of the videogrammetry. Generally, a high-speed videogrammetric measurement system consists of high-speed CMOS cameras, host computers, a synchronous controller, and capturing cards, as shown in Figure 1.
3. Accuracy Analysis
To implement a complete videogrammetric measurement for the shaking table structure, five key links need to be completed: camera calibration, camera synchronization, the layout of the control points, target detection and tracking, and the calculation of the 3D spatial coordinates of the tracked points. Each contains factors that influence the accuracy of the final results. In this section, we analyze these key links of high-speed videogrammetric measurement and validate the corresponding accuracy.
3.1. The Accuracy of Camera Calibration
The purpose of camera calibration is to determine the interior orientation parameters and the distortion parameters needed for the accurate calculation of the 3D spatial coordinates of the tracked targets. Because the interior orientation parameters and the distortion parameters of a camera change with temperature and humidity, the camera needs to be calibrated shortly before or after the experiment. Two-dimensional camera self-calibration is a good choice: it is a mathematically rigorous approach for calibrating high-precision stages, and it uses the stage to calibrate itself, which offers many advantages, including the possibility of standardizing measurements of accuracy. During the calibration, a system of equations is obtained in which the interior orientation parameters of the camera and the distortion parameters of the lens are the unknowns. Figure 2 illustrates the imaging process of a videogrammetric camera. Here, $f$ is the focal length, $(x_0, y_0)$ are the image coordinates of the principal point, and $\Delta_r$ and $\Delta_d$ denote the radial and the decentering lens distortion, respectively. Therefore, any measured image point $(x, y)$ can be compensated by
$$x' = x + \Delta x_r + \Delta x_d, \qquad y' = y + \Delta y_r + \Delta y_d,$$
where $(x', y')$ is the corrected image point, $\Delta x_r$ and $\Delta y_r$ are the $x$- and $y$-components of the radial lens distortion correction, and $\Delta x_d$ and $\Delta y_d$ are the $x$- and $y$-components of the decentering lens distortion correction.
The interior orientation parameters of a camera define the spatial position of the perspective centre, the principal distance, and the location of the principal point; the distortion parameters include radial and tangential distortion, i.e., the deviation from the principle of central perspective. The imaging geometry can be described by the collinearity condition equations:
$$x - x_0 + \Delta x = -f\,\frac{\bar{X}}{\bar{Z}}, \qquad y - y_0 + \Delta y = -f\,\frac{\bar{Y}}{\bar{Z}},$$
where $(\bar{X}, \bar{Y}, \bar{Z})^{T} = R^{T}(X - X_S,\; Y - Y_S,\; Z - Z_S)^{T}$ are the coordinates in the image auxiliary coordinate system, $R$ is the rotation matrix, $x$ and $y$ are the measured coordinates of the image point, $x_0$ and $y_0$ are the principal point coordinates, $f$ is the lens focal length, and $\Delta x = \Delta x_r + \Delta x_t$ and $\Delta y = \Delta y_r + \Delta y_t$ combine the radial ($\Delta x_r$, $\Delta y_r$) and tangential ($\Delta x_t$, $\Delta y_t$) lens distortion compensations in the $x$ and $y$ directions, respectively.
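The collinearity condition can be sketched numerically as follows. This is an illustrative implementation only; the Euler-angle parameterization of the rotation matrix and the sign conventions are assumptions, since conventions vary between photogrammetry texts, and distortion terms are omitted for clarity.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix R from three Euler angles (radians); one common
    photogrammetric convention, assumed here for illustration."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz

def project(X, Xs, R, f, x0, y0):
    """Project object point X into the image via the collinearity condition,
    given the perspective centre Xs, rotation R, and interior orientation."""
    Xb, Yb, Zb = R.T @ (np.asarray(X, float) - np.asarray(Xs, float))
    x = x0 - f * Xb / Zb
    y = y0 - f * Yb / Zb
    return x, y
```

With a camera at the origin looking along the negative Z-axis, a point at (1, 2, -10) with f = 1 projects to (0.1, 0.2), which is a quick sanity check of the geometry.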
Radial distortion is the major imaging error for most camera systems. It is usually modeled with a polynomial series with distortion parameters $K_1$ to $K_3$ and can be compensated by the following function:
$$\Delta x_r = \bar{x}\,(K_1 r^2 + K_2 r^4 + K_3 r^6), \qquad \Delta y_r = \bar{y}\,(K_1 r^2 + K_2 r^4 + K_3 r^6),$$
where $\bar{x} = x - x_0$, $\bar{y} = y - y_0$, $r^2 = \bar{x}^2 + \bar{y}^2$, and $K_1$, $K_2$, and $K_3$ are the radial distortion parameters.
Tangential distortion is mainly caused by decentering and misalignment of the lens; its effect is small relative to radial distortion. Tangential lens distortion is considered only for high-precision measurement, such as the videogrammetric measurement of the shaking table structure in this paper. It can be compensated by the following function:
$$\Delta x_t = P_1\,(r^2 + 2\bar{x}^2) + 2P_2\,\bar{x}\bar{y}, \qquad \Delta y_t = P_2\,(r^2 + 2\bar{y}^2) + 2P_1\,\bar{x}\bar{y},$$
where $P_1$ and $P_2$ are the tangential distortion parameters.
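The combined radial and tangential correction can be sketched as a short function. This is a minimal illustration assuming the conventional Brown-model parameterization with coefficients $K_1$–$K_3$ and $P_1$–$P_2$; whether the correction is added to or subtracted from the measured coordinates depends on the sign convention adopted for the coefficients.

```python
def correct_distortion(x, y, x0, y0, K1, K2, K3, P1, P2):
    """Apply radial (K1..K3) and tangential (P1, P2) lens distortion
    corrections to a measured image point (x, y)."""
    xb, yb = x - x0, y - y0            # coordinates relative to principal point
    r2 = xb**2 + yb**2
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3
    dx_r, dy_r = xb * radial, yb * radial                  # radial terms
    dx_t = P1 * (r2 + 2 * xb**2) + 2 * P2 * xb * yb        # tangential terms
    dy_t = P2 * (r2 + 2 * yb**2) + 2 * P1 * xb * yb
    return x + dx_r + dx_t, y + dy_r + dy_t
```

With all coefficients set to zero, the point passes through unchanged, which makes the function easy to sanity-check.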
In this paper, two kinds of high-speed CMOS cameras, the MC 1311 and the CR1000, were calibrated with the two-dimensional self-calibration method using PhotoModeler software. The results show that the standard deviations of the focal length $f$, the principal point $(x_0, y_0)$, and the width and height of the sensor are all better than 0.006 mm, indicating high accuracy; the standard deviation of the first radial distortion coefficient $K_1$ is better than 0.006 mm, that of the second radial distortion coefficient $K_2$ is better than 5 × 10−5 mm, and the standard deviations of the two tangential distortion coefficients $P_1$ and $P_2$ are both better than 2 × 10−5 mm.
3.2. The Accuracy of Camera Synchronization
The purpose of the camera synchronous controller is to guarantee camera synchronization. Generally, the synchronous controller is operated by a host computer; once a synchronization signal is sent by the host computer, thousands of stereo images are captured by the high-speed CMOS cameras. Ideally, each stereo pair should be captured at exactly the same time; the better the camera synchronization, the higher the accuracy of the resulting 3D spatial coordinates of the tracked targets. The accuracy of camera synchronization therefore directly influences the accuracy of stereo videogrammetry, and it is critical to validate it in order to control its influence on the ultimate accuracy of the results. In this paper, an experiment at a frame rate of 100 fps was conducted with the two synchronized high-speed CMOS cameras, and the captured stereo images from both cameras were used to validate the synchronization accuracy. The results show that the accuracy of the camera synchronous controller between the two cameras reaches 3 μs.
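The effect of imperfect synchronization on the measured coordinates can be estimated with a simple worst-case bound: between two nominally simultaneous exposures separated by a trigger offset Δt, a target moving at velocity v travels v·Δt. The numerical values below (a table velocity of 1 m/s and a 3 μs offset) are illustrative assumptions, not measurements from the experiment.

```python
def sync_displacement_error(velocity_m_s, offset_s):
    """Worst-case apparent displacement error caused by a trigger offset:
    the target moves v * dt between the two exposures."""
    return velocity_m_s * offset_s

# Illustrative values (assumed): a table moving at 1 m/s with a
# 3-microsecond trigger offset gives a sub-micrometre error.
err_mm = sync_displacement_error(1.0, 3e-6) * 1000.0  # convert m to mm
```

Under these assumptions the synchronization contribution is on the order of 0.003 mm, i.e., negligible against the submillimeter accuracy target.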
3.3. The Layout of the Control Network
Videogrammetry is a branch of photogrammetry, particularly close-range photogrammetry, and the layout of the control network also influences the accuracy of the final results. The purpose of the control points is to calculate the exterior orientation elements, which are in turn used to calculate the 3D spatial coordinates of the tracked points. For videogrammetry, convergent photography is the best method to acquire images with high accuracy. Before the experiment, a sufficient number of control points must be arranged around the shaking table structure, and an electronic total station is used to measure their 3D spatial coordinates, which are then used to calculate the exterior orientation elements. In this paper, a SOKKIA SET230R electronic total station, with an accuracy of 1″ in angle measurement and ±1 mm/km in distance measurement, was used to measure the artificial paper targets, and the accuracy can reach 0.2 mm.
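A quick calculation shows why the stated instrument specifications support sub-0.2 mm control-point coordinates. The 10 m working range used below is an assumed, typical shaking-table laboratory distance, not a figure from the paper.

```python
import math

def lateral_error_mm(distance_m, angle_arcsec):
    """Lateral positional error produced by an angular error at a given range."""
    return distance_m * math.tan(math.radians(angle_arcsec / 3600.0)) * 1000.0

def range_error_mm(distance_m, ppm):
    """Distance-measurement error from a parts-per-million (mm/km) spec."""
    return distance_m * ppm / 1000.0

# At an assumed 10 m range: 1 arcsec of angle error and 1 mm/km of
# distance error each contribute well under 0.1 mm.
angle_err = lateral_error_mm(10.0, 1.0)
dist_err = range_error_mm(10.0, 1.0)
```

Both contributions are an order of magnitude below the quoted 0.2 mm control-point accuracy, so the total-station survey does not dominate the error budget.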
3.4. The Accuracy of Elliptical Target Detection
For our videogrammetric measurement of the shaking table structure, elliptical targets are adopted to monitor the dynamic response because they offer five degrees of freedom (DOF), compared with two DOFs for a line or point feature. A circular target consisting of a black ring and a cross-wire, shown in Figure 3(a), is attached to the shaking table structure to serve as control points and tracking points. Morphological edge detection, ellipse extraction based on the geometric attributes of the ellipse, and least-squares matching (LSM) are adopted to calculate the central pixel coordinates of the elliptical target. Figure 3(b) shows an image block containing an elliptical target from an image captured by the high-speed CMOS camera, Figure 3(c) shows the calculated central pixel coordinates on the image block, and Figure 3(d) shows a five-times magnification of the tracked target and the corresponding central coordinates. With the above method, the RMS error of fitting the central pixel coordinates reaches about 0.2 pixels.
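The centre-recovery step can be illustrated with a simplified alternative to the full morphological-plus-LSM pipeline: a direct least-squares conic fit to detected edge points, from which the ellipse centre follows in closed form. This sketch assumes clean edge coordinates are already available and is not the authors' exact method.

```python
import numpy as np

def ellipse_center(xs, ys):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to edge points
    by least squares, then recover the ellipse centre from the coefficients."""
    D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    # The smallest right singular vector minimises ||D p|| with ||p|| = 1.
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]
    # The centre is where the conic's gradient vanishes.
    M = np.array([[2 * a, b], [b, 2 * c]])
    xc, yc = np.linalg.solve(M, [-d, -e])
    return xc, yc

# Synthetic check: noise-free points on an ellipse centred at (3.2, -1.7).
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
xs = 3.2 + 4.0 * np.cos(t)
ys = -1.7 + 2.5 * np.sin(t)
```

On noise-free synthetic points the centre is recovered to machine precision; with realistic edge noise the residual of such fits is what the ~0.2-pixel RMS figure quantifies.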
3.5. The Accuracy Comparison of the 3D Spatial Coordinates from the Two and Three High-Speed CMOS Cameras
After obtaining the central pixel coordinates of all the tracked points, an integrated bundle adjustment algorithm is adopted to calculate their 3D spatial coordinates [12, 13]. In order to validate the accuracy of the 3D spatial coordinates obtained by the videogrammetric method, four control points, numbered 3, 7, 8, and 11, were selected as check points, and their 3D coordinates were calculated using two and three high-speed CMOS cameras, respectively. From Tables 1 and 2, we can see that the differences in the 3D spatial coordinates between the videogrammetric measurement and the electronic total station are below 1 mm. Furthermore, the differences for the three-camera videogrammetry are similar to those for the two-camera videogrammetry, but slightly smaller.
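The core geometric step behind the multi-camera intersection can be sketched with linear (DLT) triangulation, which is commonly used to initialize a bundle adjustment; this is a simplified stand-in for the paper's integrated bundle algorithm, and the camera matrices below are toy assumptions.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 camera
    projection matrices and the corresponding pixel observations (u, v)."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)           # null vector of A is the point
    X = Vt[-1]
    return X[:3] / X[3]                   # dehomogenize

def proj(P, X):
    """Project a 3D point with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: two toy cameras with a 1-unit baseline observe a known point.
X_true = np.array([0.3, -0.2, 5.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_rec = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
```

With exact observations the point is recovered to numerical precision; adding a third camera contributes two more rows to the same system, which is why the three-camera solution in Tables 1 and 2 is slightly more accurate.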
4. Conclusion
High-speed videogrammetric measurement is a new noncontact technique for monitoring dynamic response. In this paper, we have made a detailed analysis of all the steps in order to validate its feasibility and accuracy. From the accuracy analysis, it can be concluded that submillimeter accuracy can be achieved and that all of the intermediate processes satisfy the requirements of videogrammetric measurement for the shaking table structure. Therefore, the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of shaking table structures.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work has been funded by the National Natural Science Foundation of China (Grant 41501494), the Importation and Development of High-Caliber Talents Project of Beijing Municipal Institutions (Grant CIT&TCD201704053), and the Talent Program of Beijing University of Civil Engineering and Architecture.
E. M. Mikhail, J. S. Bethel, and J. C. McGlone, Introduction to Modern Photogrammetry, John Wiley & Sons, New York, NY, USA, 2001.
D. C. Brown, “Decentering distortion of lenses,” Photogrammetric Engineering, vol. 32, pp. 444–462, 1966.
C. S. Fraser, “Network design,” in Close Range Photogrammetry and Machine Vision, K. B. Atkinson, Ed., pp. 256–281, Whittles, Dunbeath, UK, 1996.