Advances in Civil Engineering

A Corrigendum for this article has been published.

Research Article | Open Access

Volume 2020 |Article ID 2815017 | https://doi.org/10.1155/2020/2815017

Guojun Deng, Zhixiang Zhou, Xi Chu, Shuai Shao, "Identification of Behavioral Features of Bridge Structure Based on Static Image Sequences", Advances in Civil Engineering, vol. 2020, Article ID 2815017, 16 pages, 2020. https://doi.org/10.1155/2020/2815017

Identification of Behavioral Features of Bridge Structure Based on Static Image Sequences

Academic Editor: Venu G. M. Annamdas
Received: 10 Aug 2019
Revised: 13 Jul 2020
Accepted: 30 Jul 2020
Published: 20 Aug 2020

Abstract

This paper aims to further enhance the accuracy and efficiency of large bridge structural health monitoring (SHM) through noncontact remote sensing (NRS). For these purposes, the authors put forward an intelligent NRS method that collects the holographic geometric deformation of the test bridge, using the static image sequences. Specifically, a uniaxial automatic cruise acquisition device was designed to collect the dynamic and static images on bridge facade under different damage conditions. Considering the strong spatiotemporal correlations of the sequence data, the relationships between the time history images in six fixed fields of view were identified through deep learning under spatiotemporal sequences. On this basis, the behavioral features of the bridge structure were obtained under vehicle load. Finally, the global holographic deformation of the test bridge and the envelope spectrum of the global holographic deformation were derived from the deformation data. The research results show that the output data of our NRS method were basically consistent with the finite-element prediction (maximum error: 11.11%) and dial gauge measurement (maximum error: 12.12%); the NRS method is highly sensitive to the actual deformation of the bridge structure under different damage conditions and can capture the deformation in a continuous and accurate manner. Compared with the limited number of measuring points, holographic deformation data also shows higher sensitivity in damage identification.

1. Introduction

With the elapse of time, it is inevitable for a bridge to face structural degradation under the long-term effects of natural factors (e.g., climate and environment). In extreme cases, the bridge structure will suffer catastrophic damage as the traffic volume and the number of heavy vehicles continue to grow with the booming economy [1]. The traditional approach to structural management, mainly manual periodic inspection, can no longer satisfy the demands of modern transport facilities: it is inefficient, uncertain, and highly subjective, lacking scientific or quantitative bases [2–5].

Since the 1990s, structural health monitoring (SHM) systems have been set up on important large-span bridges across the globe. Their main functions are to monitor the state and behavior of the bridge structure while tracking and recording the environmental conditions. On the upside, these systems have high local accuracy, run on an intelligent system, and support long-term continuous observation. On the downside, they are costly to construct, their sensors cannot be calibrated periodically, and the layout of monitoring points is limited by local terrain and structure type. The geometric deformation of the bridge structure can only be collected at a few discrete monitoring points, making it difficult to characterize the local or global holographic geometry relevant to bridge safety [6–10].

In recent years, great progress has been made in fields like communication, material science, remote sensing, machine vision measurement, data analysis, and artificial intelligence (AI). The advancement in these fields provides the theories and techniques needed to develop more direct, economic, and quantifiable SHM methods, shedding new light on full-lifecycle SHM. For example, many scholars have acquired structural monitoring data through machine vision measurement and diagnosed structural diseases by analyzing these data with AI [11–14].

Numerous algorithms and software systems have been developed for the SHM applications of machine vision measurement, disease image recognition, and feature extraction, including automatic threshold segmentation, spatial edge detection, crack detection based on artificial neural networks (ANNs), and wavelet-based crack detection [15–21]. For instance, Harrington et al. [16] analyzed the ground penetrating radar (GPR) images of asphalt pavement with a convolutional neural network (CNN), thereby completing the identification, positioning, measurement, and 3D reconstruction of reflection cracks. Aimin et al. [17] employed the CNN and image data analysis to identify and measure pavement diseases. Li et al. [18] conducted data mining and analysis of the massive data collected by engineering structure monitoring systems, using the latest results of applied mathematics, information technology, and AI. Tianshu et al. [19] adopted an improved ANN algorithm to predict and quantify full-lifecycle SHM data. Ying et al. [20] identified the mechanical parameters of structures by noncontact measurement technology. Xiaomei et al. [21] used fractional-order total variation theory to deblur machine vision measurement images. However, the above algorithms require strong computing power, need complex operations, and remain vulnerable to light and shadow. Structural displacement monitoring based on computer vision has been applied in many tasks of bridge health monitoring, such as bridge deflection measurement [22], bridge alignment [23], bearing capacity evaluation [24], finite-element model calibration [25], modal analysis [26], damage identification [27], cable force detection [28], and dynamic weighing assistance [29].

Based on the above applications of AI in structural monitoring, there are six problems with the machine vision measurement and structural safety interpretation of bridge diseases, which are nonlinear detection targets [15, 30–38]: (1) Learning and identification require a huge amount of labelled data and rely heavily on the reference dataset, owing to the complex, variable background of the structure, the heavy presence of speckle noise, and the low signal-to-noise ratios (SNRs) of bridge and targets. (2) The contrast between targets and environment is rather low. (3) The target pixels have poor spatial continuity. (4) The mapping relationship between mechanical parameters and structural safety states is complicated, under the influence of complex, uncertain factors. (5) The network models that interpret and learn structural states are too sensitive to machine vision bias, leading to high errors and even outright mistakes. (6) The samples for model training and testing are randomly extracted from a limited number of long-term SHM data, and such a standard paradigm is not desirable. That is why real-time holographic information of the bridge structure cannot be obtained by general methods designed to identify mechanical parameters and extract disease features of the bridge structure. These methods can only acquire information on a part of the structure.

In light of the above, our research team carried out load tests on the reduced-scale model of a super long span self-anchored suspension bridge and obtained holographic spatiotemporal image sequences of the test bridge structure under multiple damage conditions by fixed-axis rotation. Next, the behavioral features of the bridge structure were recognized through deep learning of the spatiotemporal sequences of these images, revealing the actual deformations of the structure under load. Considering the defect of the contact sensors in traditional SHM systems (i.e., the observed data are discrete due to the limited number of sensors), noncontact remote sensing (NRS) was adopted to fully capture the exact global and local damage of the structure and meet the engineering requirements on measurement accuracy and low cost.

2. System Composition Principle

2.1. Intelligent NRS System

This paper designs an intelligent NRS system for holographic monitoring of bridge structure based on virtual pixel sensors and several cutting-edge techniques (i.e., modern panoramic vision sensing, pattern recognition, and computer technology). As shown in Figure 1, this intelligent NRS system mainly consists of an active image acquisition device, an automatic cruise remote control platform, an environmental monitoring unit, a signal transmission unit, and a data storage and analysis unit.

To monitor the holographic geometry of the bridge structure, the automatic cruise parameters (preset position, watch position, cruise time, and sampling time) are configured by computer to remotely control the active image acquisition device and the environmental monitoring unit. In this way, the dynamic and static images of the bridge structure can be captured in the current field of view. Figure 2 is a photo of the intelligent NRS system for our load tests on the reduced-scale model of a super long span self-anchored suspension bridge.

In this paper, the static images of the bridge facade obtained by a high-definition camera are mainly analyzed. To achieve the desired accuracy, the acquisition view of the whole bridge is divided according to the single-factor scaling method, as shown in Figure 3. Adjacent fields of view overlap by 20%–30%. Images are taken in turn from left to right (or right to left), and the device then patrols continuously to collect images.
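The overlap-based division of the acquisition view can be sketched as follows; `plan_fields_of_view` and its parameters are illustrative names invented here, not part of the authors' system.

```python
def plan_fields_of_view(span_length, view_width, overlap_ratio=0.25):
    """Return the left edge of each field of view along the bridge facade.

    Adjacent views overlap by `overlap_ratio` (the paper uses 20%-30%),
    so each new view advances by view_width * (1 - overlap_ratio).
    """
    step = view_width * (1.0 - overlap_ratio)
    positions, left = [], 0.0
    # Advance left-to-right until the last view reaches the right end.
    while left + view_width < span_length:
        positions.append(left)
        left += step
    positions.append(min(left, span_length - view_width))  # final view flush with the end
    return positions

# Hypothetical numbers: a 24 m facade covered by 5 m wide views at 25% overlap.
views = plan_fields_of_view(span_length=24.0, view_width=5.0, overlap_ratio=0.25)
```

Each consecutive pair of positions is closer than one view width, so every point of the facade is seen by at least one view and neighboring views share the required overlap band.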

During the test, the digital images of the specimen were also collected using a low-cost Canon EOS 5Ds R digital camera; the camera and lens parameters are shown in Table 1.


Number of pixels: 50.6 million
Size of sensor: 36 × 24 mm
Data interface: USB 3.0
Aspect ratio: 3 : 2
Photo-sensor: CMOS
Image amplitude: 8688 × 5792
Pixel size: 4.14 μm
Lens type: EF 24–70 mm f/2.8L
Focal length: 50 mm
Lens relative aperture: F2.8–F22

The fields of view are equivalent to fixed points, each monitoring an area of fixed size. During the monitoring, the intelligent NRS system cruised continuously in the same field of view. In this way, the time dimension was added to the 2D images, forming 3D data, which makes it possible to extract spatiotemporal features with 3D convolutional kernels. Hence, the authors created a 3D CNN based on a 3D convolutional feature extractor. The 3D CNN was adopted to generate multichannel information from continuous images, convolve and downsample the 7 time-continuous static contour images in each channel, and integrate the information of all channels to characterize the behavioral features of the bridge structure.
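The core operation can be illustrated with a naive sketch (not the authors' implementation): a (d, k, k) kernel slid over a stack of frames mixes information across time as well as space.

```python
import numpy as np

def conv3d_single(volume, kernel):
    """Naive 'valid' 3D convolution of one (frames, H, W) volume with a
    (d, k, k) kernel: every output value mixes d consecutive frames,
    which is how temporal features enter the feature maps."""
    f, h, w = volume.shape
    d, k, _ = kernel.shape
    out = np.zeros((f - d + 1, h - k + 1, w - k + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(volume[t:t + d, i:i + k, j:j + k] * kernel)
    return out

# Seven time-continuous contour images, as in the paper's acquisition scheme
# (random data here, just to show the shapes involved).
frames = np.random.default_rng(0).random((7, 8, 8))
feat = conv3d_single(frames, np.ones((3, 3, 3)) / 27.0)  # local spatiotemporal mean
```

With a depth-3 kernel, the 7 input frames yield 5 output "time" slices, so the temporal dimension shrinks just as the spatial ones do.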

The 3D CNN is compared with the 2D CNN in Figure 4. The 2D CNN conducts 2D convolution of the image data (n-channel, n-frames, w, h) in each channel, using n-channel kernels of size (k, k), and then adds up the n results. By contrast, the 3D CNN convolves each volume (n-frames, w, h) of the image data (n-channel, n-frames, w, h), using n-channel kernels of size (d, k, k), and then adds up the n-channel volumes. The 2D CNN can only extract the features of each frame, that is, the spatial features, while the 3D CNN can also extract the temporal features through 3D convolution and pooling. Therefore, the 2D CNN always outputs an image, whether it is applied to analyze a single image of the bridge structure or a video/temporal image sequence (with multiple frames and channels). Meanwhile, the 3D CNN can obtain the temporal features of the input signals when it is adopted to analyze a video of the bridge structure [39].

Figure 5 explains how the 3D CNN deep learning model is trained to extract the structural deformation features. As shown in Figure 5, the model mainly consists of a hardwired kernel layer, 3 convolution layers, and 1 fully connected layer.

With the 3D CNN (Figure 5), five different features were obtained by the hardwired kernels: grayscale, x-direction gradient, y-direction gradient, x-direction optical flow, and y-direction optical flow. The gradient and optical flow characterize the edge distribution of the image and the motion trend of object. The two parameters were extracted by 3D CNN to identify the behavioral features of the bridge structure.

Unlike the 2D CNN, the 3D CNN introduces optical flow fields in the horizontal and vertical directions. The optical flow represents the change of the image and contains information on target motion, which enables the observer to judge the motion state of the target. From the definition of optical flow, the optical flow field can be viewed as a 2D instantaneous velocity field covering all the pixels in the image, where each 2D velocity vector is the projection of the 3D velocity vector of the visible point in the scene onto the imaging surface. Therefore, the optical flow contains not only the motion information of the object but also rich information about the 3D structure of the scene.

The algorithm using optical flow must satisfy two hypotheses: (1) constant brightness, that is, the brightness of the same point does not change with time; (2) small motion, that is, positions do not change drastically with the elapse of time, so that the derivative of grayscale with respect to position can be taken. In our research, both hypotheses are satisfied by the data collected by the NRS: the brightness of each point in the collected images remained constant because of the small data interval of the seven time sequences, and, compared with the monitored objects in each field of view, the magnitude of structural deformation is small.

Optical flow is the instantaneous velocity of a point on a spatial object moving on the imaging plane. Considering the similarity between the 2D velocity field and the luminance field, Horn and Schunck [40] introduced the optical flow constraint equation and proposed the basic method of optical flow calculation (see equation (1)):

I_x · u + I_y · v + I_t = 0  (1)

where I_x and I_y are the spatial change rates of the brightness of the image point (x, y) in the x- and y-directions, respectively; u and v are the components of the optical flow of the point in the x- and y-directions, respectively; and I_t is the temporal change rate of the brightness of the point. The object motion between adjacent frames can be derived from the spatiotemporal variation of pixels in the image sequence.
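Under the two hypotheses above, the constraint I_x·u + I_y·v + I_t = 0 can be demonstrated with a minimal least-squares estimate of a single global (u, v); this is a deliberate simplification for illustration (closer to Lucas–Kanade than to the full Horn–Schunck solver), not the authors' pipeline.

```python
import numpy as np

def estimate_uniform_flow(frame0, frame1):
    """Least-squares (u, v) from the optical-flow constraint
    I_x*u + I_y*v + I_t = 0, assuming one global translation."""
    Ix = np.gradient(frame0, axis=1)   # spatial brightness rate, x-direction
    Iy = np.gradient(frame0, axis=0)   # spatial brightness rate, y-direction
    It = frame1 - frame0               # temporal brightness rate
    # Stack one constraint per pixel and solve for the shared (u, v).
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# A smooth sinusoidal pattern shifted one pixel to the right between frames,
# so the true flow is (u, v) = (1, 0) pixels/frame.
x = np.linspace(0.0, 2.0 * np.pi, 64)
f0 = np.sin(x)[None, :] * np.ones((64, 1))
f1 = np.roll(f0, 1, axis=1)
u, v = estimate_uniform_flow(f0, f1)
```

The recovered u is close to 1 pixel per frame and v close to 0, as expected for a pure horizontal translation under the small-motion hypothesis.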

Similarly, an additional dimension should be added to the activation function of the 3D CNN kernel. Based on the deformation on the x-y plane, the optical flow weight, denoted as z, was introduced, and tanh was selected as the activation function. Thus, the value at position (x, y, z) of the jth feature map in the ith layer of the 3D CNN can be written as equation (2):

v_ij^(xyz) = tanh( b_ij + Σ_m Σ_p Σ_q Σ_r w_ijm^(pqr) · v_(i−1)m^((x+p)(y+q)(z+r)) )  (2)

where b_ij is the bias of the feature map, w_ijm^(pqr) is the (p, q, r)th value of the kernel connected to the mth feature map of the previous layer, and the summation over r runs along the additional optical flow dimension z.

In summary, the workflow of the intelligent NRS system is explained in Figure 6.

2.2. Characteristics of Holographic Deformation Applied in Damage Identification

According to the technical methods described in Section 2.1, the holographic deformation shape of the main beam of the test bridge can be obtained. Compared with traditional contact measurement or measurement at a limited number of points, the holographic deformation information is far more abundant and can provide richer and more realistic data for damage identification or subsequent machine learning. In this paper, the curvature difference between the damaged and nondestructive conditions is taken as an example to demonstrate the value of holographic deformation information for damage identification.

The deformation shape itself cannot easily quantify the damage location and degree and is rarely employed directly in damage identification. The curvature, however, performs well in damage identification. Taking the curvature as an example, the displacement of each node is first obtained; then the curvature matrix is computed through difference approximation:

κ_i = (w_(i−1) − 2w_i + w_(i+1)) / h²

where w_i represents the displacement value at the ith measuring point and h is the distance between two adjacent measuring points i − 1 and i. The modal curvature difference before and after damage is

Δκ = κ^d − κ^u

where κ^u and κ^d represent the matrices of the curvature before and after damage, respectively, obtained through the difference calculation.

Structural damage identification can be realized by locating the maximum value of the curvature difference vector, and the damage degree is reflected by its magnitude.
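The curvature-difference index of Section 2.2 can be sketched on hypothetical deflection data (invented here, not the test bridge's measurements): a local extra deflection at one point produces a sharp peak in the index at that point.

```python
import numpy as np

def curvature(w, h):
    """Central-difference curvature kappa_i = (w[i-1] - 2*w[i] + w[i+1]) / h**2
    at the interior measuring points, from displacements w and spacing h."""
    w = np.asarray(w, dtype=float)
    return (w[:-2] - 2.0 * w[1:-1] + w[2:]) / h**2

def damage_index(w_intact, w_damaged, h):
    """Absolute curvature difference between damaged and intact states;
    its peak locates the damage, and its size reflects the damage degree."""
    return np.abs(curvature(w_damaged, h) - curvature(w_intact, h))

# 11 equally spaced points, as in the dial-gauge layout; the damaged case
# adds a local kink at point 6, which the index should flag.
xs = np.linspace(0.0, 1.0, 11)
w0 = np.sin(np.pi * xs)          # smooth intact deflection shape
w1 = w0.copy()
w1[6] += 0.05                    # local stiffness loss -> extra deflection
idx = damage_index(w0, w1, h=0.1)
```

The smooth part of the deflection cancels in the subtraction, so the index is zero away from the kink and peaks exactly at the damaged point, which is what makes curvature more diagnostic than the raw deformation shape.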

3. Test Overview

Based on the previous research of our team [41–43], a 1 : 30 model was constructed for the Taohuayu Yellow River Bridge. Besides, 52 concrete deck slabs (1.16 m × 0.45 m × 0.2 m) were prepared and laid on the steel box girder to simulate vehicle driving on the bridge and serve as the counterweight. Figure 7 is a photo of the reduced-scale model.

The intelligent NRS system was set up at 5 m away from the bridge facade. Then, a computer-controlled camera was rotated by fixed angles to collect the images on specific sections of the bridge from fixed positions. The layout of the lab and the principle of image collection are shown in Figure 8.

To verify the feasibility of the image collection method, 11 dial gauges were arranged along the axis of the bridge to capture the shape change while the camera took photos of the bridge. A DH5902N system was adopted for data acquisition. The arrangement of the dial gauges is displayed in Figure 9 below.

The structural deformation data of the bridge were collected under two scenarios to capture more behavioral features with the intelligent NRS system and provide more samples for deep learning. In the first scenario, the bridge had no damage, the test vehicle drove at the speed of a normal vehicle across the bridge, and the data on the structural change was collected. In the second scenario, different suspension cables were damaged to simulate varied degrees of bridge damages at different positions, the test vehicle drove at the speed of a normal vehicle across the bridge, and the data on the structural change was collected. Table 2 lists the position and number of damaged suspension cables. The serial number of suspension cables is given in Figure 10.


Serial number | Damage conditions: position of damaged cables | Damage conditions: number | Traditional method | Visual method
1 | None | 0 | Dial gauges | Intelligent NRS system
2 | 24 | 2 | Dial gauges | Intelligent NRS system
3 | 23, 24 | 4 | Dial gauges | Intelligent NRS system
4 | 22, 23, 24 | 6 | Dial gauges | Intelligent NRS system
5 | 21, 22, 23, 24 | 8 | Dial gauges | Intelligent NRS system
6 | 20, 21, 22, 23, 24 | 10 | Dial gauges | Intelligent NRS system

4. Image Data Acquisition and Preprocessing

Considering the advantages of deep learning in feature extraction over traditional machine learning, this paper obtains the structural deformations of the bridge and the behavioral features of the bridge structure in the same field of view and different time sequences through the deep learning of the grayscales and contours in the images collected by the intelligent NRS system.

4.1. Contour Extraction

The images collected by the intelligent NRS system contain the time sequences in a fixed field of view. Hence, the grayscales and contours were extracted from six images by Matlab edge function [44].

The Canny edge detector was adopted for the extraction process. This operator finds the edge points in four steps: smoothing the images with a Gaussian filter, computing the gradient amplitude and direction through finite-difference computation with first-order derivatives, applying nonmaximum suppression to the gradient amplitude, and using double thresholds to detect and connect the edges.
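A compact, illustrative version of these four steps might look like the sketch below (the paper itself relies on Matlab's edge function, not this code); the synthetic test frame stands in for a deck-slab contour against the background.

```python
import numpy as np

def smooth1d(a, g, axis):
    """Edge-padded 1D convolution along `axis` (avoids border artifacts)."""
    r = len(g) // 2
    pad = [(r, r) if ax == axis else (0, 0) for ax in range(a.ndim)]
    return np.apply_along_axis(lambda v: np.convolve(v, g, "valid"),
                               axis, np.pad(a, pad, mode="edge"))

def canny_sketch(img, sigma=1.0, low=0.1, high=0.3):
    """Minimal sketch of the four Canny steps listed above."""
    # 1. Smooth with a separable Gaussian filter.
    t = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    g = np.exp(-t**2 / (2.0 * sigma**2)); g /= g.sum()
    sm = smooth1d(smooth1d(img, g, 1), g, 0)
    # 2. Gradient amplitude and direction from first-order differences.
    gy, gx = np.gradient(sm)
    mag = np.hypot(gx, gy)
    ang = (np.rad2deg(np.arctan2(gy, gx)) + 180.0) % 180.0
    # 3. Non-maximum suppression along the quantized gradient direction.
    nms = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]
    # 4. Double threshold: keep strong edges plus weak edges touching them.
    strong = nms >= high * nms.max()
    weak = nms >= low * nms.max()
    edges = strong.copy()
    while True:
        grow = weak & (np.roll(edges, 1, 0) | np.roll(edges, -1, 0) |
                       np.roll(edges, 1, 1) | np.roll(edges, -1, 1))
        if not (grow & ~edges).any():
            break
        edges |= grow
    return edges

# A synthetic frame with one horizontal step edge at row 16.
frame = np.zeros((32, 32))
frame[16:, :] = 1.0
edges = canny_sketch(frame)
```

On this frame the detector returns a thin band of edge pixels along the step, mirroring how the lower edge contour of the deck slabs is isolated from the static images.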

The Canny edge detector can effectively extract the contours of the bridge structure from the static images collected by the intelligent NRS system. The extracted contours were further processed with graphics processing software to remove the contours of the useless parts, leaving only the lower edge contour of the deck slabs to reflect the variation in structural shape.

Since the fields of view in the six images are fixed, the contours of the bridge structure were located by the following method. The six images containing the initial boundary of the bridge structure were taken as the original images. The coordinates of each pixel in the boundary were extracted from the six images. Based on these coordinates, each pixel was marked in the original images, revealing the position of the initial boundary. The manual marking helps to suppress the noises in the images. In the subsequent deep learning, the contours can be automatically tracked based on the marked pixels, revealing the change features of the bridge structure. The specific flow of the denoising and marking is shown in Figure 11.

Because the scale ratio of the test bridge to the actual bridge is 1 : 30 in the longitudinal direction, the test vehicle was accelerated to 0.74 m/s–0.93 m/s by a traction motor, about 1/30 of the speed of normal vehicles (80 km/h–100 km/h). The test vehicle drove back and forth on the bridge. Meanwhile, the camera covering the six fields of view (overlap ratio: 20%–30%) cruised seven times under each damage condition. In other words, seven sets of images were collected on the vehicle in the same field of view under each damage condition.
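The speed range follows directly from the similitude ratio, as this small check shows (the function name is ours, for illustration only):

```python
# Scaling prototype speeds of 80-100 km/h by the 1:30 length ratio
# reproduces the 0.74-0.93 m/s range used for the test vehicle.
def model_speed_ms(prototype_kmh, length_scale=30):
    """Convert a prototype speed in km/h to the scaled model speed in m/s."""
    return prototype_kmh / 3.6 / length_scale

low_speed = model_speed_ms(80)    # lower bound of the normal-vehicle range
high_speed = model_speed_ms(100)  # upper bound of the normal-vehicle range
```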

4.2. Dataset Construction based on Spatiotemporal Static Image Sequences

To realize the holographic monitoring of bridge structure with the uniaxial automatic cruise acquisition device, the key lies in setting up the global and local holographic data based on the dynamic and static image sequences, which were captured at different times from multiple angles and fields of view.

The data in static image sequences have four main features: multitime, multifield of view, multiangle, and strong correlation between time and space. First, the holographic data collected in different fields of view differed in time history; second, based on technical and economic considerations, the local details of the bridge structure were monitored with a few devices in different fields of view, yielding the local holographic data in each field of view; third, the data were collected by the automatic cruise device at different watch positions, and the resulting angle difference should be adaptively equalized in data processing; fourth, the spatiotemporal features of the original data were determined by the random impact of the entire bridge at the current moment or period, and the structural response in local field of view reflects the overall state of the whole structure to different degrees.

The load on the test bridge is generated by the moving test vehicle. Since the load is not static, the images from the six fields of view were not captured at the same moment and not suitable for stitching. Thus, the behavioral features of the bridge structure in all fields of view were obtained from each field of view and then summed up through deep learning.

During data acquisition, the time pointer and the space pointer were constructed based on the features of the intelligent NRS system and the image sequences. The former (time dimension) indicates the current damage condition and field of view, and the latter (spatial dimension) reflects the position of the current local area relative to the global structure. The spatiotemporal features of the data sequences in the static images are presented in Figure 12 below.

Because of these data features, the temporal information and spatial information were added to the dataset as labels before deep learning. The temporal information indicates the variation in damage condition and the order of images in the same field of view, and the spatial information reflects the correlation between a local structure and the global structure in a field of view. On this basis, the temporal, spatial, and angular data were constructed for the original data and then integrated with environmental data (i.e., temperature, humidity, and illumination). The labels can be expressed as follows, where i is the serial number of damage conditions of the test bridge (i = 1∼6); j is the label position under different damage conditions (1 for Time, 2 for Space, 3 for Angle…); m is the invocation parameter of the data on labels Time, Space, Angle, and Environment in a local field of view; n is the serial number of measurements under the same damage condition, that is, the time history of the same damage condition in the same field of view; and Time_label, Space_label, Angle_label, and Env._label are the matrices of labels Time, Space, Angle, and Environment, respectively.
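One possible way to organize the four-part labelling is sketched below; the paper does not give the data structure, so every name and value here is hypothetical.

```python
# Hypothetical sketch of the Time/Space/Angle/Environment labelling: one
# record per image, so that damage condition i, time-history index n, the
# field of view, and the cruise angle can all be recovered during learning.
def make_label(i, n, field_of_view, watch_angle, env):
    """Build the label record for one image.

    i: damage condition (1-6); n: measurement index under that condition;
    field_of_view: position of the local view relative to the global
    structure; watch_angle: cruise-device watch position (degrees);
    env: (temperature, humidity, illumination)."""
    return {
        "Time_label":  (i, n),         # condition + order in the sequence
        "Space_label": field_of_view,  # local area vs. global structure
        "Angle_label": watch_angle,    # watch position of the cruise device
        "Env_label":   env,            # temperature, humidity, illumination
    }

# 6 damage conditions x 7 cruises x 6 fields of view = 252 labelled images.
labels = [make_label(i, n, fov, 12.5, (21.0, 0.55, 300.0))
          for i in range(1, 7) for n in range(1, 8) for fov in range(1, 7)]
```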

5. Extraction and Discussion of Behavioral Features of Bridge Structure

The Matlab edge function and the Canny edge detector were adopted to extract the grayscale and contour from each image in the static image sequences of the test bridge under different damage conditions. Based on the extracted feature, the contours were marked on the original images. Then, the marked images and the subsequently taken images were compiled into a dataset. After that, the 3D CNN was applied to the deep learning and identification of behavioral features of the bridge structure, based on the images collected by the NRS method (our method).

In machine vision displacement measurement with a limited number of measuring points, real-time measurement can be achieved even with limited on-board computing power, with a delay controlled within 1 second. In the method proposed in this paper, by contrast, the number of measuring points increases by a factor of several hundred, so all images were processed on a high-performance computer (Intel i7-7700K CPU, NVIDIA GTX 1070 GPU, and 32 GB RAM). The images collected under the six working conditions described in this paper were each trained within 10 minutes. It can also be seen that, as the damage increases, the deformation of the main beam grows, which enlarges the search range of each pixel and makes the training time increase nonlinearly: the shortest training time was 6 min 13 s for condition 1, rising to 9 min 27 s for condition 6.

The deformation curves of the bridge structure in different fields of view are displayed in Figure 13, together with the theoretical deformation curves predicted by the finite-element method.

As shown in Figure 13, the deformation curves of our method were less smooth than those of the finite-element method. There are two possible reasons for the lack of smoothness. Firstly, the lower edge of deck slabs marked in the original images, which was considered as the contour of the bridge structure, is not smooth and even discrete in some places. Secondly, the positions of the marked pixels changed greatly after the bridge deformed and were not captured accurately through deep learning.

The first problem was solved through contour stacking analysis on structural deformation monitoring, a method previously developed by our research team. This method treats the initial contours as known white noises of the system and subtracts them from the contours acquired under different damage conditions. The second problem calls for improvement of the capture algorithm. Here, the improvement is realized through manual intervention. In this way, the bridge deformation data in the six fields of view were integrated into the global holographic deformation of the test bridge (Figure 14), and the envelope spectrum of the global holographic deformation was obtained based on all the deformation data (Figure 15).

The deformation map of the test bridge based on the 11 dial gauges is not presented here: even if fitted, the data collected by these gauges were too discrete to demonstrate the global deformation features of the test bridge. Moreover, the initial state of the test bridge was not measured at completion, making it impossible to know the actual stress state of the bridge structure at that moment. However, the relative deformation of the test bridge in the monitoring period can be obtained from Figure 14. The obtained results were compared with the relative deformation recorded by the dial gauges, with the aim of verifying the accuracy of our method.

Out of the many damage conditions, the greatest difference lies between damage condition 1 (no damage) and damage condition 6 (suspension cables 20∼24 are damaged). Thus, these two damage conditions were subjected to stacking analysis and compared in detail (Table 3).


No. | Dial gauge measurement R1 (mm) | Finite-element method R2 (mm) | Noncontact remote sensing R3 (mm) | Measured deviation (%) | Relative error (%)
1 | 0.11 | 0 | 0.10 | 9.09 | —
2 | 0.99 | 1.00 | 1.08 | 9.09 | 8.00
3 | 1.56 | 1.55 | 1.68 | 7.69 | 8.39
4 | 5.42 | 5.55 | 5.87 | 8.30 | 5.77
5 | 17.46 | 17.32 | 18.75 | 7.39 | 8.26
6 | 15.16 | 15.30 | 16.43 | 8.38 | 7.39
7 | 5.18 | 5.24 | 5.67 | 9.46 | 8.21
8 | 0.93 | 0.96 | 1.02 | 9.68 | 6.25
9 | 0.37 | 0.38 | 0.41 | 10.81 | 7.89
10 | 0.35 | 0.33 | 0.37 | 5.71 | 12.12
11 | 0.09 | 0 | 0.08 | 11.11 | —

Table 3 shows that our NRS method accurately derives the global deformation features of the bridge structure from those collected in the local fields of view. Compared with the dial gauge measurement and finite-element results, the maximum errors of our method were 11.11% (relative to the dial gauges, at the 11th measuring point) and 12.12% (relative to the finite-element prediction, at the 10th measuring point). This means the global holographic deformation curves obtained through stacking analysis of contours are accurate enough for engineering practice.

Comparing Figure 14 and Table 3, the maximum deformation in Table 3 was recorded at the 5th measuring point, but the most severely deformed place actually appeared in the 10.7 m long section between the 5th and 6th measuring points. It can also be seen from Figure 14 that the deformed positions of the bridge structure continued to change with the damage conditions. It is difficult to rationalize the layout of measuring points for traditional methods like dial gauge measurement. After all, a single, fixed layout cannot capture the deformation features of the bridge structure under the stiffness variation at different positions. Our NRS method greatly outperformed dial gauge measurement in that it captured the pixel-level variation at any position of the bridge structure and met the engineering requirement on measurement accuracy.

Through the method proposed in this paper, much richer deformation information can be obtained under different working conditions. Although damage can be recognized by comparing deformation shapes, their location and degree cannot be analyzed qualitatively from the shapes alone. The curvature difference described in Section 2.2 further reflects the advantage of holographic contour information in damage identification. Following Section 2.2, the curvature at each measuring point was obtained both from the dial gauges and from the proposed method, and the curvature difference under each damage condition was computed against the undamaged condition, as shown in Figure 16. Figure 16(a) shows that, with the curvature difference from the dial gauges (a limited number of measuring points), it is difficult to find any pattern or identify the damage across the different damage conditions. As seen in Figure 16(b), the proposed method yields far more holographic structural deformation information: the curvature difference under each damage condition corresponds one-to-one to the change of damage location, and its magnitude corresponds to the change of damage degree. However, due to the complexity of the suspension bridge structure, further research is needed to quantify the damage degree.

6. Conclusions

In this paper, the geometric deformation of a reduced-scale model of a 24 m span self-anchored suspension bridge under multiple damage conditions was captured with the NRS method, and the global behavioral features of the test bridge were identified using the 3D CNN algorithm. The results were compared with finite-element predictions and dial gauge measurements. The main conclusions are as follows:

(1) A fixed-point uniaxial automatic cruise acquisition device was designed to collect dynamic and static images of the bridge facade under different damage conditions. The spatiotemporal sequences of static images were then processed with the Matlab edge function, the Canny edge detector, and the 3D CNN. In this way, the global holographic deformation of the test bridge under different damage conditions was obtained, shedding new light on the development of an economical, efficient, and direct SHM technology.

(2) In our previous research, NRS was also applied to acquire the global holographic deformation of the bridge structure, but that method required stitching the bridge images into a panorama. This limitation has been overcome in the present work: the bridge image sequences taken in fixed fields of view were learned and identified by the 3D CNN, which outputs the global holographic deformation data of the test bridge directly. The output data were basically consistent with the finite-element predictions and dial gauge measurements, and the global holographic deformation curves exhibited similar trends under different damage conditions, with an error of less than 12%. This means our method satisfies the engineering requirement on measurement accuracy.

(3) The bridge deformation was also measured at multiple points with dial gauges. However, the measured data could not reflect the actual deformation features of the bridge structure under different damage conditions, and the abnormal data on damage-induced local deformations were mostly lost. By contrast, analyzing the lower edge contours of the deck plates produced the global holographic deformation data, laying a solid basis for damage identification.

(4) Taking the main beam deflections of the test bridge under multiple damage conditions, obtained by both the dial gauges and our method, as the basic data, the deflection curvature difference was used to identify the damage. The results show that damage location and damage degree cannot be identified from a limited number of measuring points alone, whereas the holographic deformation information obtained by our method can identify changes in both damage location and damage degree.

(5) This paper is the first attempt to test the effect of the intelligent NRS method on bridge deformation monitoring. The automatic feedback adjustment of parameters in the mathematical network model and the optimization of the activation function have not yet been investigated; these two issues will be tackled in further analysis. Other open issues include reducing the manual interventions (e.g., feature engineering and labelling) in building the dataset of holographic deformation samples from the original dynamic and static images, and constructing an intelligent sensing network. Likewise, the damage identification presented here is only a preliminary attempt; further research is needed on identification methods better suited to the intelligent NRS.
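The lower-edge-contour extraction summarized in conclusions (1) and (3) can be illustrated with a toy example. This is only a hedged sketch: the synthetic frame, threshold, and variable names are invented for illustration, and a plain vertical-gradient threshold stands in for the Matlab edge function / Canny detector used in the actual pipeline.

```python
import numpy as np

# Synthetic grayscale facade frame: a bright horizontal band stands in for
# the deck plate seen in one fixed field of view of the cruise device.
frame = np.zeros((120, 320), dtype=float)
frame[40:61, :] = 200.0  # deck plate region (rows 40..60)

# Vertical intensity gradient as a simple stand-in for the edge detector:
# large magnitude marks the transitions at the plate's upper/lower edges.
grad = np.abs(np.diff(frame, axis=0))
edge_mask = grad > 100.0

# Lower edge contour: for each pixel column, the last row where a strong
# gradient occurs. Deflection is later read from how this contour shifts
# between frames of the spatiotemporal sequence.
lower_edge = np.array([np.nonzero(col)[0].max() for col in edge_mask.T])
```

Tracking `lower_edge` column by column across the time history images in each fixed field of view is what turns the static image sequences into a dense, holographic deflection line rather than a handful of point measurements.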

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (Grant no. 51778094) and the National Science Foundation for Distinguished Young Scholars of China (Grant nos. 51608080 and 51708068).


Copyright © 2020 Guojun Deng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
