Abstract

We propose a method that uses a photogrammetric board to measure the deformation of a railroad bridge by close-range photogrammetry. The method computes the exterior orientation parameters and determines three-dimensional (3D) coordinates from images without surveying control points. The bridge deformation measured using the proposed method was compared to that measured with a 3D laser tracker. The measurement error was within 1 mm, showing that the proposed method can measure the deformation of the I-plate girder of a railroad bridge. This method may serve as an alternative to precise stability inspections and bridge inspections.

1. Introduction

Rail bridges have been installed in Korea since 1899 and remain in use throughout the country. Many people are involved in their maintenance and repair, which are costly. Slight sagging or twisting of a bridge plate is difficult to analyze, which can limit the establishment of long-term maintenance plans [1].

Two types of sensors are normally used to measure the deformation of a railroad bridge. The first type is contact sensors, such as linear variable differential transformers (LVDTs) and electrical strain gages, which must be attached to the bridge [2]. The second type is noncontact sensors, such as laser-based devices. Contact-type sensors are not easy to set up at the target positions and are inconvenient to use; their attachment may also distort the movement characteristics of the bridge. Noncontact sensors have sufficient measurement resolution for small spans [3], but in many cases their measurement distances are limited and they require relatively expensive devices, such as laser trackers. In addition, both methods generally measure only unidirectional deformation at a limited point on the bridge, and many sensors are required to measure many points [4, 5].

The photogrammetric technique is a noncontact method based on cameras. The targets for this technique can be simple pieces of paper or pen marks, and the cameras do not need physical access to the object. Therefore, there is no mass loading effect, in contrast to conventional sensors such as LVDTs. This methodology enables full-field measurement of a structure because there is no limit on the number of targets. Moreover, when unique features on the structure surface, such as corners, are utilized, targets may not even be necessary [6].

The technique has been applied in various fields to measure the three-dimensional (3D) geometry of physical objects from two-dimensional photographs [7–10]. It could replace costly measurement equipment for inspecting civil structures. Process automation is also possible, and the method allows more secure observations for deformation monitoring [11].

Barazzetti and Scaioni [4] measured the degree of bending of a beam in a laboratory using sequential images from a single camera. They applied the target matching technique to find the image coordinates of several points attached to the beam. Maas and Hampel [11] introduced a way of using spatial information and digital close-range photogrammetry to measure the geometric deformation of civil infrastructure, including pavement, bridges, and water reservoirs. Alemdar et al. [2] applied a photogrammetric technique to measure the vertical deformation and rotation of a reinforced concrete bridge column under dynamic loading and compared the results with those obtained from traditional instruments such as LVDTs. The photogrammetric method performed very well in tracking lateral and vertical displacements but had limited success with rotations.

Some previous studies have observed the 3D deformation of bridges. Cunha et al. [12] implemented an optical 3D displacement measurement system and applied it to observe the structural dynamics of a long-span suspension bridge. Handayani et al. [13] applied photogrammetry to observe the 3D deformation of an entire bridge span using a nonmetric camera with self-calibration and space resection processing in a bundle adjustment technique. To measure 3D objects using photogrammetry, the interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) of the camera are traditionally needed. These are obtained from coordinated control points (CPs), which must be surveyed using a terrestrial instrument (e.g., a total station).

In this study, we propose a new method to determine the IOPs and EOPs using only a photogrammetric board, without a CP survey. The IOPs and EOPs are determined automatically with feature extraction methods. Therefore, the instantaneous 3D deformation of a railroad bridge can be measured quickly by 3D positioning of targets on the bridge using stereo image sequences.

2. Method

When a train passes over a plate-type bridge, the repeated load of the train is transferred to the I-plate girders. Thus, torsional stress may occur in the I-plate girder in the x, y, and z directions. To measure this deformation, stereo images must be acquired with two cameras, and the 3D positions of target points attached to the bridge must be calculated over time. Figure 1 shows the detailed procedure of the proposed method for measuring the 3D positions using stereo photogrammetry. This method can simultaneously measure the 3D displacements of many points on the bridge without contact. Figure 2 shows an illustration of the measurement.

In general, EOPs are determined from CPs located near the object being measured, and the 3D positions of the CPs are acquired using an instrument such as a total station [8, 9]. The photogrammetric board used in this study decreases the time and cost required to survey the CPs. As shown in Figure 2, the photogrammetric board is a portable reference panel composed of points with known 3D positions at constant intervals. The board is attached to the side of an I-type girder of the bridge to determine the IOPs and EOPs.

In this study, the IOPs consist of only four parameters for each camera: the focal length ($f_i$), the two coordinates of the principal point ($x_{0i}$ and $y_{0i}$), and the radial lens distortion parameter ($k_i$) in the CCD image plane ($i = 1$ for the left camera, $i = 2$ for the right camera). The EOPs include six parameters for each camera: the location (three spatial coordinates) of the lens perspective center and the attitude (three rotation angles) of the lens in 3D space. Targets are also attached to the side of the bridge to measure deformation, and the dynamic movements of these targets are traced by the stereo photogrammetric technique. A remote controller and two receivers were used to trigger the two cameras simultaneously and to avoid the camera shake caused by manual operation.

Feature extraction methods were applied to automatically obtain the image coordinates of the photogrammetric board points. The IOPs and EOPs were subsequently calculated from these image coordinates and the 3D spatial coordinates of the board points in the first image of each sequence. These parameters can be computed by space resection and the least-squares method using the collinearity condition in (1), which states that the perspective center of the camera lens, a point on the photograph, and the corresponding point in object space lie along the same bundle of rays [8–10]:

\[
x_i - x_{0i} = -f_i \, \frac{m_{11i}(X - X_{Li}) + m_{12i}(Y - Y_{Li}) + m_{13i}(Z - Z_{Li})}{m_{31i}(X - X_{Li}) + m_{32i}(Y - Y_{Li}) + m_{33i}(Z - Z_{Li})}, \qquad
y_i - y_{0i} = -f_i \, \frac{m_{21i}(X - X_{Li}) + m_{22i}(Y - Y_{Li}) + m_{23i}(Z - Z_{Li})}{m_{31i}(X - X_{Li}) + m_{32i}(Y - Y_{Li}) + m_{33i}(Z - Z_{Li})}, \tag{1}
\]

where $i = 1$ or $2$ (1 = left camera, 2 = right camera); $x_i$ and $y_i$ are the image coordinates of the targets; $x_{0i}$ and $y_{0i}$ are the coordinates of the principal point (image center); $X_{Li}$, $Y_{Li}$, and $Z_{Li}$ are the spatial coordinates of the camera lens center in a user-defined local coordinate system; $X$, $Y$, and $Z$ are the spatial coordinates of targets on the structure; $f_i$ is the focal length; and $m_{11i}, \ldots, m_{33i}$ are the components of the rotation matrix based on the rotation angles ($\omega_i$, $\varphi_i$, and $\kappa_i$) with respect to the x-, y-, and z-axes, respectively.
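As an illustration of how the collinearity condition in (1) is used, the following Python sketch projects an object point into the image plane of one camera given approximate IOPs and EOPs. The function names, the omega-phi-kappa rotation convention, and the omission of the lens distortion term are our illustrative assumptions; in practice, (1) is linearized and solved iteratively by least squares (space resection) over all board points.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix from omega-phi-kappa angles (radians) about the x-, y-, and z-axes."""
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(omega), -np.sin(omega)],
                   [0.0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0.0, np.sin(phi)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(phi), 0.0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                   [np.sin(kappa),  np.cos(kappa), 0.0],
                   [0.0, 0.0, 1.0]])
    return Rz @ Ry @ Rx

def collinearity_project(obj_point, eop, iop):
    """Project an object point (X, Y, Z) to image coordinates (x, y) with equation (1).
    eop = (XL, YL, ZL, omega, phi, kappa); iop = (f, x0, y0). Distortion is omitted here."""
    XL, YL, ZL, omega, phi, kappa = eop
    f, x0, y0 = iop
    m = rotation_matrix(omega, phi, kappa)
    d = np.asarray(obj_point, dtype=float) - np.array([XL, YL, ZL])
    denom = m[2] @ d                      # third row of the rotation matrix
    x = x0 - f * (m[0] @ d) / denom
    y = y0 - f * (m[1] @ d) / denom
    return x, y
```

The residuals between these projected coordinates and the measured image coordinates of the board points are what the least-squares resection minimizes when estimating the IOPs and EOPs.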

Finally, the 3D spatial coordinates (X, Y, Z) of the targets in each set of stereo images can be computed by space intersection [6]. This is achieved using the image coordinates of the corresponding points (p1 and p2) in the left and right images together with the IOPs and EOPs of the camera lens centers (O1 and O2), as shown in Figure 3. Space intersection is based on the following principle: when identical points are found in the left and right images, the two straight lines (collinear rays) passing through O1 and p1 and through O2 and p2 must intersect at the object point P, as shown in the figure [14].
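The sketch below shows one common way to carry out the space intersection numerically: each image point is converted into a ray from its lens center (e.g., by rotating the image vector (x_i − x_0i, y_i − y_0i, −f_i) into object space with the transpose of the camera's rotation matrix), and the two rays are intersected in a least-squares (midpoint) sense. This midpoint formulation and the function name are illustrative assumptions, not necessarily the exact algorithm used in [6] or [14].

```python
import numpy as np

def space_intersection(O1, d1, O2, d2):
    """Least-squares (midpoint) intersection of two rays P = O_i + t_i * d_i.
    O1, O2: lens perspective centers; d1, d2: ray directions in object space."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Normal equations for the scalars t1, t2 that minimize the gap between the rays.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(O2 - O1) @ d1, (O2 - O1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    # The estimated object point P is the midpoint of the closest approach of the two rays.
    return (O1 + t1 * d1 + O2 + t2 * d2) / 2.0
```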

3. Experiment and Results

The experiment was performed at the Suncheon River Railroad Bridge in South Korea. The bridge was constructed in 1922 and is about 100 years old. It is an I-plate girder bridge with a length of 154 meters (Table 1). It was not practical to measure the displacement of the entire bridge or of the spans crossing the river. Therefore, a span where the cameras could be easily installed was chosen for testing, as shown in Figure 4.

Two types of targets were used in the field test. First, 21 targets were attached to the side of the bridge to determine the IOPs and EOPs using the conventional method and to measure the bridge deformation; the 3D spatial coordinates of these targets were determined using a total station and used as CPs. Second, a photogrammetric board (90 cm × 50 cm) with a feature spacing of 10 cm was used to determine the IOPs and EOPs using the proposed method, as shown in Figure 4. The donut-shaped targets consist of an inner circle with a diameter of 1 cm and an outer circle with a diameter of 3 cm.

The two digital charge-coupled device (CCD) cameras (NIKON D200) have a sensor size of 23.6 mm × 15.8 mm (pixel size = 6 μm × 6 μm), a resolution of 3872 × 2592 pixels, and a maximum recording rate of 6 frames per second (fps). The focal length, baseline, and camera-to-object distance were 55 mm, 7 m, and 10 m, respectively.

First, to assess the vibrational impact of obtaining sequential photographs with the remote controller before the train entered the rail bridge, we measured the differences in the image coordinates of 18 CP targets across the nine sequential images, as shown in Figure 5. Second, to analyze camera shake due to ground vibration when trains enter the bridge, the image coordinates of a background point behind the bridge were also measured in the sequential images (see Figure 6). This point should retain the same image coordinates in the sequential images because it is not connected to the bridge and thus is not affected by the vibration caused by the train entering the bridge.
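As a minimal sketch of this stability check, assuming the target image coordinates of each frame are stored in a NumPy array (the array layout is our assumption), the per-frame discrepancy relative to the first frame can be computed as follows:

```python
import numpy as np

def frame_stability(coords):
    """coords: array of shape (n_frames, n_targets, 2) holding target image
    coordinates in pixels. Returns the mean and maximum per-target discrepancy
    of each frame relative to the first (reference) frame."""
    offsets = np.linalg.norm(coords - coords[0], axis=2)   # pixel offset per target
    return offsets.mean(axis=1), offsets.max(axis=1)
```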

Table 2 shows the average discrepancy of the image coordinates across the nine sequential images. The discrepancy arose almost entirely from the image matching process; therefore, the influence of vibration on the sequential photographs is very small. We also observed no change in the image coordinates due to the ground vibration caused by the train, as shown in Figure 6.

The first stereo images were obtained before a passenger train entered the bridge span. Six sequential photographs were then acquired over one second using the two cameras, as shown in Figure 7. A remote controller was used to trigger the cameras simultaneously and to prevent camera shake (Figure 4). The 3D coordinate axes were set up as shown in Figure 2. To verify the accuracy of the proposed method, the bridge deformation was simultaneously measured using a Radian laser tracker with INNOVO™ technology [15] at almost the same point as one of the CP targets (Figure 4). This tracker can continuously measure 3D behavior with an accuracy better than 0.5 mm and records the reflector position along three axes.

The experiment was performed using two methods. The first is the conventional photogrammetric technique, in which the 3D coordinates of the 21 CP targets are measured with a total station. The image coordinates of these targets in the first frame were acquired from the left and right cameras, and the IOPs and EOPs were subsequently determined (case 1). Eleven of the 21 CPs were used to determine the IOPs and EOPs; the remaining 10 points were used as check points to verify the accuracy of the determined IOPs and EOPs. The second case is the proposed technique, in which the 36 points of the photogrammetric board are automatically detected in the first frame of the left and right cameras and the IOPs and EOPs are then obtained (case 2). Twenty of the 36 points were used to determine the IOPs and EOPs; the remaining 16 points were used as check points to verify the accuracy of the determined IOPs and EOPs.

In the second case, we compared 10 automatic feature extraction methods for acquiring the image coordinates of the centers of the 36 circle points on the photogrammetric board: FAST, Harris, Shi and Tomasi, SURF, MSER, kp Harris (an improved version of Harris), BRISK, SUSAN, SIFT, and Moravec [16–19]. The results, shown in Figure 8, indicate that the MSER method is the best technique for detecting only the circle centers of the 36 points. MSER returned three to six feature points for each of the 36 board points, and the final image coordinates were obtained by averaging them. The averaged coordinates have a standard deviation of 0.05–0.26 pixels with respect to the extracted features.
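A hedged OpenCV sketch of how MSER can be used to recover the circle centers is shown below; the merge distance and the greedy averaging of nearby region centroids are illustrative assumptions rather than the authors' exact pipeline.

```python
import cv2
import numpy as np

def board_circle_centers(gray, merge_dist=5.0):
    """Detect MSER regions in an 8-bit grayscale image and average nearby
    region centroids so that each board circle yields one (sub)pixel center."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    centroids = np.array([r.mean(axis=0) for r in regions])   # (x, y) per region

    centers, used = [], np.zeros(len(centroids), dtype=bool)
    for i, c in enumerate(centroids):
        if used[i]:
            continue
        close = np.linalg.norm(centroids - c, axis=1) < merge_dist
        centers.append(centroids[close].mean(axis=0))          # average duplicate detections
        used |= close
    return np.array(centers)
```

The resulting centers would then be matched to the 36 known board positions before the space resection step.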

To verify the accuracy of the image coordinates acquired by MSER, the same coordinates were also measured by manual reading, and the results were compared. The agreement was within a 0.5-pixel root-mean-square error (Table 3); therefore, MSER is feasible for extracting the center point of the circular targets, as shown in Figure 8.

Table 4 shows the IOPs and EOPs and their standard deviations (σ) for each case, which indicate the precision of the determined parameters. The EOPs differ between the two cases owing to the different methods used. Because the reference data differ, the IOPs also differ slightly; however, the difference is insignificant. The precisions indicate that the parameters are, overall, well determined in the least-squares solution. To evaluate the actual accuracy of the determined parameters, the 3D coordinates of the check points for each case were computed using the space intersection equation [6]. The results were compared with the positions measured by the total station (case 1) and with the known feature spacing of the board (case 2), as shown in Table 5. Cases 1 and 2 show root-mean-square errors of 2 mm and 1 mm, respectively, with respect to the x-, y-, and z-axes. The errors in case 2 are smaller than those in case 1; therefore, we confirmed that the IOPs and EOPs from case 2 are better suited for measuring the dynamic displacements of the bridge.

Next, for the six frames from the left and right cameras, target-image matching using the normalized correlation coefficient was performed with the first image as a reference, and the image coordinates of the 21 targets were acquired for all frames. The 3D coordinates of the targets as a function of time were then determined using space intersection. Meanwhile, the displacements of a laser reflector point along the three axes (x, y, and z) were measured by the laser tracker to verify the accuracy of the photogrammetric techniques, as shown in Figure 9.
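A minimal sketch of this target tracking step is given below, using OpenCV's normalized correlation coefficient (cv2.TM_CCOEFF_NORMED) and a local search window around each target's previous position; the window size and the function interface are illustrative assumptions.

```python
import cv2

def track_target(ref_patch, frame, prev_tl, half=40):
    """Find a target patch in a new frame near its previous top-left corner
    using the normalized correlation coefficient (cv2.TM_CCOEFF_NORMED)."""
    x, y = int(prev_tl[0]), int(prev_tl[1])
    h, w = ref_patch.shape[:2]
    # Limit the search to a window around the previous position.
    y0, x0 = max(0, y - half), max(0, x - half)
    window = frame[y0:y + h + half, x0:x + w + half]
    score = cv2.matchTemplate(window, ref_patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(score)
    # New top-left corner (and hence target position) in full-frame coordinates.
    return (x0 + best[0], y0 + best[1])
```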

Using the photogrammetric techniques of the two cases while the second wagon was passing over the study span, the 3D displacements of the CP target closest to the laser reflector were obtained and compared with those measured by the laser tracker (Figure 10, Table 6). In Figure 10, the laser tracker measurement shows almost no displacement in the x and z directions, whereas the displacement obtained by photogrammetry is about 2 mm; the difference between case 1 and case 2 is about 1 mm. In the y direction, the displacement shapes from photogrammetry and the laser tracker are similar and differ by about 1 mm, with almost no difference between case 1 and case 2. Because the coordinate axes of the laser tracker do not coincide with those of the photogrammetry, the displacement differences along the x- and z-axes increased. To compensate for this difference in the coordinate axes, the ranges (the Euclidean norms of the 3D displacement vectors) were calculated and compared.
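Under this reading, the range r compared in Table 6 is the Euclidean norm of the 3D displacement vector,

\[
r = \sqrt{\Delta x^{2} + \Delta y^{2} + \Delta z^{2}},
\]

which is invariant to the orientation of the coordinate axes and therefore allows a fair comparison between the laser tracker and the photogrammetric results.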

As shown in Table 6, the maximum difference between the proposed method and the laser tracker measurement was within 3 mm along the individual axes and about 1 mm in the range when measuring at a distance of about 10 m. The deformation measured by the proposed method also shows a tendency similar to that of conventional photogrammetry.

Figure 11 shows the displacements measured by the proposed method for the 21 CPs. Overall, 2 to 4 mm of sagging occurred in the y direction of the I-type girder when the train entered the span. At the same time, torsion occurred about the x- and z-axes. The laser tracker results show almost no displacement in the x and z directions; however, because that analysis covers only one point, it cannot represent the overall twist of the span. Thus, the proposed method confirmed that sag and torsion due to the cyclic loading of the train occur simultaneously in this bridge. The deflection and torsional deformation of the I-type girder can therefore be measured using the proposed method without depending on precise inspection by a trained professional.

4. Conclusion

Measurements of the deformation of railroad bridges with I-type girders have depended on the skill of the inspection worker. Most studies have mainly analyzed 2D deformations, such as bridge deflection, using 1D measurements of a single position with a laser displacement measurement system or a single camera. This study proposed a photogrammetric method using a movable photogrammetric board. The technique enables efficient measurement of deformations at a large number of points. We measured the overall 3D deformation of a span of a railroad bridge using two cameras of the same type. The 3D displacement measured using the proposed method was similar to that obtained with a conventional photogrammetric method, and the difference from a precise 3D laser tracker was about 1 mm in the range. The repeated load of the train was also confirmed to cause sagging and torsion of the I-type girder simultaneously.

The proposed method could make it possible to detect and manage the risk factors of a railroad bridge and prevent safety accidents caused by human inspection. In addition, this method could replace other tests that are inconvenient and costly. However, the proposed method has the limitation that the position and rotation of the cameras can change slightly owing to the local climate or external influences (wind, vibration, temperature, etc.). Therefore, avoiding long-term measurements is advantageous for reducing displacement measurement errors. Such effects have been addressed elsewhere by using a fixed frame of reference in the object space [11]; we plan to conduct future experiments that consider such a fixed reference.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by a grant (18RDRP-B076564-05) from the Regional Development Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.