Research Article  Open Access
Xianglei Liu, Yi Tang, Jing Ma, "Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement", Journal of Electrical and Computer Engineering, vol. 2018, Article ID 4058205, 5 pages, 2018. https://doi.org/10.1155/2018/4058205
Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement
Abstract
The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement of the displacement of a high-speed shaking table structure. The purpose of this paper is to validate the three-dimensional coordinate accuracy of the shaking table structure acquired from the presented high-speed videogrammetric measuring system. All of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of the shaking table structure.
1. Introduction
In the field of civil engineering, the dynamic response of many structures needs to be tested in a simulated environment to verify their seismic performance, reduce disaster losses, and maintain social stability before they are applied in practice. One of the most frequently used experimental platforms is the earthquake shaking table, a device for shaking structural models or building components with a simulated seismic wave [1]. Traditionally, contact transducers attached to the structural models were mostly used to acquire the response of the shaking table structure; they share the common limitation of providing only one-dimensional information at a single point. Some newer techniques, such as Global Positioning System (GPS), Laser Doppler Vibrometers (LDV), and Interferometric Synthetic Aperture Radar (InSAR) sensors [2–5], have also been studied to capture the dynamic responses of shaking table structures. However, all of these techniques are impractical for monitoring a high-speed shaking table structure.
Another potential technique for target tracking is videogrammetry, which has been widely used in the medical, automotive, and astronautical fields since the 1970s [6]. Videogrammetry is a direct extension of photogrammetry: it calculates the three-dimensional coordinates of an object as a function of time from simultaneously triggered image sequences, and can further perform three-dimensional shape reconstruction and analyze the dynamic response of the shaking table structure [7]. For the shaking table experiment, it is difficult to obtain a detailed dynamic response of a high-speed moving object using common cameras with frame rates below about 20 fps. In recent years, with the rapid development of sensor technology, especially Complementary Metal-Oxide-Semiconductor (CMOS) technology for constructing integrated circuits, the frame rate of some high-speed CMOS cameras can reach 1000 fps at a maximum resolution of 1280 (H) by 1024 (V) pixels. In theory, equipped with high-speed CMOS cameras, the videogrammetric measurement technique can satisfy the requirements of monitoring the dynamic response of the shaking table structure. However, many factors, such as the accuracy of the measurement of control points, the accuracy of camera synchronization, and the accuracy of elliptical target detection, determine the accuracy of the final three-dimensional (3D) spatial coordinates of the tracked targets. Therefore, the objective of this paper is to study each key step of high-speed videogrammetric measurement for the shaking table structure in order to validate the accuracy of the 3D spatial coordinates of tracking targets attached to its surface.
The rest of the paper is organized as follows. Section 2 introduces the configuration of the high-speed videogrammetric measurement system. Section 3 analyzes the key factors affecting the accuracy of videogrammetric measurement: camera calibration, camera synchronization, the layout of the control network, elliptical target detection, and the comparison of the 3D spatial coordinates of targets calculated from two and three high-speed CMOS cameras, respectively. Finally, conclusions are presented in Section 4.
2. Videogrammetric Measurement System
For high-speed videogrammetric measurement, the measurement system itself is the foundation and plays the most important role. Generally, the high-speed videogrammetric measurement system consists of high-speed CMOS cameras, host computers, a synchronous controller, and capture cards, as shown in Figure 1.
3. Accuracy Analysis
To implement a complete videogrammetric measurement for the shaking table structure, five key links need to be carried out: camera calibration, camera synchronization, the layout of the control points, target detection and tracking, and the calculation of the 3D spatial coordinates of the tracking points. Each link contains factors that influence the accuracy of the final results. In this section, we analyze these key links of high-speed videogrammetric measurement and validate the corresponding accuracy.
3.1. The Accuracy of Camera Calibration
The purpose of camera calibration is to determine the interior orientation parameters and the distortion parameters for the accurate calculation of the 3D spatial coordinates of the tracking targets. Because the interior orientation parameters and the distortion parameters of a camera change with temperature and humidity, the camera needs to be calibrated shortly before or after the experiment. Two-dimensional camera self-calibration is a good choice: it is a mathematically rigorous approach for calibrating high-precision stages that uses the stage to calibrate itself, and it offers many advantages, including the possibility of standardizing measurements of accuracy. During camera calibration, a system of equations is obtained that includes the interior orientation parameters of the camera and the distortion parameters of the lens as unknowns [8]. Figure 2 illustrates the imaging process of a videogrammetric camera. Here, f is the focal length, O is the principal point with the image coordinates (x0, y0), and Δr and Δd denote the radial lens distortion and the decentering lens distortion, respectively. Therefore, any measured image point (x, y) can be compensated by

x′ = x + Δx_r + Δx_d,
y′ = y + Δy_r + Δy_d,

where (x′, y′) is the corrected image point, Δx_r and Δy_r are the x and y components of the radial lens distortion correction Δr, and Δx_d and Δy_d are the x and y components of the decentering lens distortion correction Δd.
The interior orientation parameters of a camera define the spatial position of the perspective centre, the principal distance, and the location of the principal point; the distortion parameters include radial and tangential distortion, which describe the deviation from the principle of central perspective. The imaging geometry can be described by the collinearity condition equations:

x − x0 + Δx_r + Δx_t = −f · X̄ / Z̄,
y − y0 + Δy_r + Δy_t = −f · Ȳ / Z̄,

with (X̄, Ȳ, Z̄)ᵀ = R · (X − Xs, Y − Ys, Z − Zs)ᵀ,

where X̄, Ȳ, and Z̄ are the coordinates of the object point in the image auxiliary coordinate system, R is the rotation matrix, x and y are the measured coordinates of the image point, x0 and y0 are the principal point coordinates, f is the lens focal length, Δx_r and Δy_r are the compensations for lens radial distortion in the x and y directions, respectively, and Δx_t and Δy_t are the compensations for lens tangential distortion in the x and y directions, respectively.
Radial distortion is the major imaging error for most camera systems. It is usually modeled by a polynomial series with distortion parameters K1 to K3 [9] and can be compensated by the following functions:

Δx_r = (x − x0)(K1·r² + K2·r⁴ + K3·r⁶),
Δy_r = (y − y0)(K1·r² + K2·r⁴ + K3·r⁶),

where r² = (x − x0)² + (y − y0)², and K1, K2, and K3 are the radial distortion parameters.
Tangential distortion is mainly caused by decentering and misalignment of the lens elements; its effect is small relative to radial distortion and is considered only for high-precision measurement, such as the videogrammetric measurement of the shaking table structure in this paper. Tangential distortion can be compensated by the following functions:

Δx_t = P1·(r² + 2(x − x0)²) + 2·P2·(x − x0)(y − y0),
Δy_t = P2·(r² + 2(y − y0)²) + 2·P1·(x − x0)(y − y0),

where P1 and P2 are the tangential (decentering) distortion parameters.
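As a concrete illustration, the radial and tangential correction terms above can be combined into a single compensation step. The following sketch is not the authors' implementation; the function name and the coefficient values used below are illustrative only.

```python
def correct_distortion(x, y, x0, y0, K1, K2, K3, P1, P2):
    """Compensate a measured image point (x, y) with the radial and
    tangential (Brown) distortion models, returning (x', y')."""
    xb, yb = x - x0, y - y0                       # offsets from the principal point
    r2 = xb * xb + yb * yb                        # squared radial distance r^2
    radial = K1 * r2 + K2 * r2**2 + K3 * r2**3    # K1*r^2 + K2*r^4 + K3*r^6
    dx_r, dy_r = xb * radial, yb * radial         # radial correction components
    dx_t = P1 * (r2 + 2 * xb * xb) + 2 * P2 * xb * yb  # tangential correction
    dy_t = P2 * (r2 + 2 * yb * yb) + 2 * P1 * xb * yb
    return x + dx_r + dx_t, y + dy_r + dy_t
```

With all coefficients set to zero the point passes through unchanged, which is a quick sanity check on any implementation of these formulas.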
In this paper, two kinds of high-speed CMOS cameras, the MC 1311 and the CR1000, were calibrated by two-dimensional self-calibration using PhotoModeler software. The results show that the standard deviations of the focal length f, the principal point coordinates (x0, y0), and the width and height of the sensor are all better than 0.006 mm, indicating high accuracy; the standard deviation of the first radial distortion coefficient K1 is better than 0.006 mm, the standard deviation of the second radial distortion coefficient K2 is better than 5 × 10^{−5} mm, and the standard deviations of the two tangential distortion coefficients P1 and P2 are both better than 2 × 10^{−5} mm.
3.2. The Accuracy of Camera Synchronization
The purpose of the camera synchronous controller is to guarantee camera synchronization. Generally, the synchronous controller is driven by a host computer; once a synchronization signal is sent by the host computer, thousands of stereo images are captured by the high-speed CMOS cameras. Ideally, each stereo pair should be captured at exactly the same time; thus, the better the camera synchronization, the higher the accuracy of the resulting 3D spatial coordinates of the tracking targets. The accuracy of camera synchronization therefore directly influences the accuracy of stereo videogrammetry, and it is critical to validate it in order to control its influence on the ultimate accuracy of the results. In this paper, an experiment at a frame rate of 100 fps was conducted with the two synchronized high-speed CMOS cameras, and the obtained stereo images from both cameras were used to validate the synchronization accuracy; the results show that the synchronization accuracy of the controller between the two cameras reaches 3 μs.
3.3. The Layout of the Control Network
Videogrammetry is a branch of photogrammetry, especially close-range photogrammetry, and the layout of the control network also influences the accuracy of the final results. The purpose of the control points is to allow the elements of exterior orientation to be calculated, which in turn are used to calculate the 3D spatial coordinates of the tracking points. For videogrammetry, convergent photography is the best method for acquiring images with higher accuracy [10]. Before the experiment, plenty of control points need to be arranged around the shaking table structure, and an electronic total station is used to measure their 3D spatial coordinates, from which the exterior orientation parameters are calculated. In this paper, a SOKKIA SET230R electronic total station, with an accuracy of 1″ in angle measurement and ±1 mm/km in distance measurement, was used to measure the artificial paper targets, and the accuracy can reach 0.2 mm.
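To make the role of the measured control points concrete, the camera orientation can be estimated linearly with the Direct Linear Transform (DLT), a common linear stand-in for the rigorous resection used in practice; the sketch below (with synthetic, noise-free data, not the paper's control network) recovers a combined interior/exterior projection matrix from six or more control points.

```python
import numpy as np

def dlt_resection(object_pts, image_pts):
    """Estimate a 3x4 projection matrix (interior + exterior orientation
    combined) from >= 6 non-coplanar control points via the DLT."""
    A = []
    for (X, Y, Z), (u, v) in zip(object_pts, image_pts):
        # Each control point contributes two linear equations.
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)   # right singular vector of the smallest singular value

def project(P, X):
    """Project a 3D point with the estimated projection matrix."""
    x = P @ np.append(np.asarray(X, float), 1.0)
    return x[:2] / x[2]
```

The quality of the recovered matrix is usually judged by the reprojection residuals at check points, which is exactly the kind of validation performed with the total-station coordinates in this paper.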
3.4. The Accuracy of Elliptical Target Detection
For our videogrammetric measurement of the shaking table structure, elliptical targets are adopted to monitor its dynamic response because an ellipse has five degrees of freedom (DOF), compared with the 2 DOF of a line or point feature [11]. A circular target consisting of a black ring and a crosswire, shown in Figure 3(a), is attached to the shaking table structure to serve as control points and tracking points. Morphological edge detection, ellipse extraction based on the geometric attributes of an ellipse, and LSM are adopted to calculate the central pixel coordinates of the elliptical target. Figure 3(b) shows an image block containing an elliptical target from an image captured by the high-speed CMOS camera, Figure 3(c) shows the calculated central pixel coordinates on the image block, and Figure 3(d) shows a five-times magnified view of the tracking target and the corresponding central coordinates. Adopting the above method, the RMS of fitting the central pixel coordinates achieves about 0.2 pixels.
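The sub-pixel centre step can be sketched as a least-squares fit of a general conic to the detected edge pixels, whose centre is then taken as the target centre. This is only an illustration of least-squares ellipse fitting on synthetic edge points, not the exact LSM procedure of the paper.

```python
import numpy as np

def ellipse_center(xs, ys):
    """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to
    edge pixels by least squares and return its centre (sub-pixel)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    # Design matrix: one row per edge pixel, columns are the conic monomials.
    D = np.column_stack([xs**2, xs * ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]          # null-space vector = conic coefficients
    # Centre: where both partial derivatives of the conic vanish.
    xc, yc = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return xc, yc
```

On noise-free samples of an ellipse the centre is recovered essentially exactly; real edge noise is what limits the fit to the ~0.2-pixel RMS reported above.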
3.5. The Accuracy Comparison of the 3D Spatial Coordinates from Two and Three High-Speed CMOS Cameras
After obtaining the central pixel coordinates of all the tracking points, an integrated bundle adjustment algorithm is adopted to calculate their 3D spatial coordinates [12, 13]. In order to validate the accuracy of the 3D spatial coordinates of the tracking points obtained by the videogrammetric method, four control points, named 3, 7, 8, and 11, were selected as check points, and their 3D coordinates were calculated with two high-speed CMOS cameras and with three high-speed CMOS cameras, respectively. From Tables 1 and 2, we can see that the difference in the 3D spatial coordinates between the videogrammetric measurement and the electronic total station is lower than 1 mm. Furthermore, the differences between the three-camera videogrammetry and the electronic total station are similar to, but slightly smaller than, those between the two-camera videogrammetry and the electronic total station.
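The benefit of a third camera can be seen in the standard linear triangulation step that underlies any multi-camera intersection: each synchronized camera contributes two equations per tracked point, so an extra view simply over-determines the system. The sketch below is that generic linear step, not the integrated bundle algorithm of [12, 13].

```python
import numpy as np

def triangulate(projections, image_points):
    """Linear triangulation of one 3D point from two or more synchronized
    cameras, each given by a 3x4 projection matrix and a measured (u, v)."""
    A = []
    for P, (u, v) in zip(projections, image_points):
        # Two homogeneous equations per camera; a third camera adds two rows.
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                     # homogeneous least-squares solution
    return X[:3] / X[3]
```

With noisy image measurements, the extra rows from a third camera reduce the influence of any single camera's errors, which matches the slightly smaller check-point differences observed for the three-camera configuration.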


4. Conclusion
High-speed videogrammetric measurement is a new non-contact technique for monitoring dynamic response. In this paper, we have analyzed in detail all the steps required to validate its feasibility and accuracy. The accuracy analysis shows that submillimeter accuracy can be achieved and that all of the intermediate processes satisfy the requirements of videogrammetric measurement for the shaking table structure. Therefore, the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of the shaking table structure.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work has been funded by the National Natural Science Foundation of China (Grant 41501494), the Importation and Development of HighCaliber Talents Project of Beijing Municipal Institutions (Grant CIT&TCD201704053), and the Talent Program of Beijing University of Civil Engineering and Architecture.
References
[1] A. R. Ghaemmaghami and M. Ghaemian, "Experimental seismic investigation of Sefid-rud concrete buttress dam model on shaking table," Earthquake Engineering & Structural Dynamics, vol. 37, no. 5, pp. 809–823, 2008.
[2] A. Cunha, E. Caetano, and R. Delgado, "Dynamic tests on large cable-stayed bridge," Journal of Bridge Engineering, vol. 6, no. 1, pp. 54–62, 2001.
[3] M. Celebi and A. Sanli, "GPS in pioneering dynamic monitoring of long-period structures," Earthquake Spectra, vol. 18, no. 1, pp. 47–61, 2002.
[4] W.-S. Chan, Y.-L. Xu, X.-L. Ding, Y.-L. Xiong, and W.-J. Dai, "Assessment of dynamic measurement accuracy of GPS in three directions," Journal of Surveying Engineering, vol. 132, no. 3, pp. 108–117, 2006.
[5] K.-T. Park, S.-H. Kim, H.-S. Park, and K.-W. Lee, "The determination of bridge displacement using measured acceleration," Engineering Structures, vol. 27, no. 3, pp. 371–378, 2005.
[6] J. Leifer, J. T. Black, S. W. Smith, N. Ma, and J. K. Lumpp, "Measurement of in-plane motion of thin-film structures using videogrammetry," AIAA Journal of Spacecraft and Rockets, vol. 44, no. 6, pp. 1317–1325, 2007.
[7] E. M. Mikhail, J. S. Bethel, and J. C. McGlone, Introduction to Modern Photogrammetry, John Wiley & Sons, New York, NY, USA, 2001.
[8] E. Sanz-Ablanedo, J. R. Rodríguez-Pérez, P. Arias-Sánchez, and J. Armesto, "Metric potential of a 3D measurement system based on digital compact cameras," Sensors, vol. 9, no. 6, pp. 4178–4194, 2009.
[9] D. C. Brown, "Decentering distortion of lenses," Photogrammetric Engineering, vol. 32, pp. 444–462, 1966.
[10] C. S. Fraser, "Network design," in Close Range Photogrammetry and Machine Vision, K. B. Atkinson, Ed., pp. 256–281, Whittles, Dunbeath, UK, 1996.
[11] Z. F. Alemdar, J. A. Browning, and J. Olafsen, "Photogrammetric measurements of RC bridge column deformations," Engineering Structures, vol. 33, no. 8, pp. 2407–2415, 2011.
[12] R. N. Jiang and D. V. Jauregui, "Development of a digital close-range photogrammetric bridge deflection measurement system," Measurement, vol. 43, no. 10, pp. 1431–1438, 2010.
[13] J. Leifer, B. J. Weems, S. C. Kienle, and A. M. Sims, "Three-dimensional acceleration measurement using videogrammetry tracking data," Experimental Mechanics, vol. 51, no. 2, pp. 199–217, 2011.
Copyright
Copyright © 2018 Xianglei Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.