Advances in Civil Engineering


Research Article | Open Access

Volume 2019 | Article ID 6924976 | 13 pages

Image-Based Underwater Inspection System for Abrasion of Stilling Basin Slabs of Dam

Academic Editor: Roman Wan-Wendner
Received: 20 Jun 2019
Revised: 11 Aug 2019
Accepted: 24 Aug 2019
Published: 01 Oct 2019


The abrasion of stilling basin slabs caused by waterborne particles is one of the main types of surface damage in the operation of hydropower stations. To determine whether the stilling basin slabs need repair, periodic inspections of their erosion condition are required. The practical problems are how to obtain underwater images without unwatering the basin and how to analyse the abrasion through those images. This paper develops a novel underwater inspection system named UIS-1, which consists of a customized underwater robot and a dedicated quantitative analysis method for this situation. Firstly, an integrated component was designed for the underwater robot that partially removes the siltation and obtains images of the concrete surface of the stilling basin slabs at the desired position. Secondly, the paper proposes an image algorithm to obtain the aggregate exposure ratio for quantitative abrasion analysis. This algorithm uses SLIC superpixels and the SVM machine learning method to detect coarse aggregate exposure automatically. Then, the aggregate exposure ratio is calculated to analyse the degree of abrasion. Finally, the UIS-1 system was evaluated in field experiments at a dam in Sichuan, China, and its performance was assessed by comparison.

1. Introduction

The stilling basin is one of the most commonly used structures for energy dissipation in dams. Its purpose is to minimize the scouring effects that can occur downstream of the flow. Unlike other hydraulic structures, stilling basin slabs suffer chiefly from abrasion erosion, caused by the friction and impact of waterborne debris on the concrete surface [1]. Traditional underwater methods for dam erosion detection include sonar [2, 3] and ground-penetrating radar [4], but the equipment is expensive and the detection results are not visually intuitive. Meanwhile, a more in-depth assessment of the concrete surface of the stilling basin slabs relies on visual inspection. The problem to be solved is how to obtain images of the stilling basin slabs and analyse the abrasion through those images.

Because it is economically and technically impractical to unwater the stilling basin, the feasible solutions for inspecting the erosion condition are divers and underwater robots. Diver inspection is limited by the depth of the dive, the elevation at which the dive is performed, and the continuous working time. To overcome these limitations, many underwater robots have been developed for dam inspection. The article [5] proposed an autonomous underwater vehicle (AUV) for automatically surveying a dam's wall while snapping pictures and gathering navigation data in order to build a globally optimized and georeferenced photomosaic that enables systematic inspections. Another AUV system [6] was developed to inspect hydroelectric facilities and includes a vision system to detect and measure cracks. The dam inspection system named Anchor Diver 5.2 was developed in article [7]. This system drops a remotely operated vehicle (ROV) into the water by a hoist system from a boat and implements a "Water Loupe" visual module to improve visibility in murky water. Shimono developed an underwater inspection system consisting of an unmanned surface vehicle (USV) and an ROV that hangs from the USV [8, 9]; the system can obtain the position of the inspected point without using any high-cost devices. TriMARES [10] is a hybrid vehicle that can be programmed as an AUV or operated as an ROV for the inspection of a dam in Brazil. An ROV system consisting of a measuring camera, a cleaning brush, a wall thickness gauge, and a sounding hammer was used to inspect a dam wall surface and gate equipment [11]. MASKI, the latest-generation ROV designed by Hydro-Québec, was used to inspect concrete walls, metallic structures, and riverbeds in Quebec [12]. A dedicated ROV with a fixed-distance tracking control strategy was designed for sloping dam walls [13].
An underwater wall-climbing robot [14] with a novel swirling-sucker design was proposed for automatic inspection of concrete structure surfaces. An underwater hybrid robot consisting of a crawler robot and an ROV was developed [15] for surface crack detection. Unfortunately, the different types of underwater robots described above cannot be directly applied to the inspection of stilling basin slabs: because the slabs are covered by silt and sand deposits, it is impossible to visualize them with an underwater camera alone.

With the development of image-processing techniques, computer vision-based methods have been widely studied for damage on the surfaces of concrete structures, including cracks, spalling, efflorescence, and holes [16]. From the aspect of manual inputs, existing approaches can be divided into rule-based and machine learning-based approaches. Rule-based approaches use predefined features to identify the damage, such as edge features [17], grey-scale histograms [18], fuzzy C-means clustering [19], V-shaped features [20], and local-global feature clustering [21, 22]. On the other hand, machine learning-based approaches have achieved notable success in detecting concrete damage. Inspired by the achievements of deep convolutional neural networks (CNN), researchers quickly introduced this state-of-the-art technique to concrete damage detection. Zhang et al. [23] proposed a six-layer CNN to detect and characterize road cracks. Cha et al. [24] used a trained CNN to scan a large-resolution image in small blocks. Cha et al. [25] also proposed region-based deep learning to detect multiple damage types in real time. Although bounding box-based methods can detect concrete damage reasonably well, they do not provide precise information about damage path and density. Fully convolutional network- (FCN-) based methods have been employed to obtain more precise information for accurate damage detection. Ni et al. [26] proposed a framework for automatic structural crack detection at pixel level. Yang et al. [27] and Dung and Anh [28] implemented FCN for semantic segmentation of concrete crack images. Furthermore, Li et al. [16] proposed an FCN to detect four types of concrete damage at pixel level.

For abrasion erosion damage, continuous mortar wear results in coarse aggregate exposure, so the abrasion damage is directly related to the aggregates exposed on the surface. In the paper [29], the wear of concrete in the stilling basin of the Vrhovo hydropower plant was analysed by manually measuring the diameter of the coarse aggregate. In the paper [30], the aggregate exposure ratio was used as a parameter to represent abrasion erosion. To the best of the authors' knowledge, there is no research on quantitative abrasion analysis using automatic aggregate detection from underwater images of stilling basin slabs. Furthermore, because samples of stilling basin slab images are insufficient, the above deep learning-based algorithms cannot be applied to aggregate detection.

Therefore, in this study, we develop a new underwater inspection system named UIS-1 for stilling basin slab inspection. The system was evaluated in field experiments at a working dam. The contributions of this study are as follows: (1) an underwater inspection robot was customized for inspection of the stilling basin; the manually operated robot can partially remove siltation and obtain clear images of the concrete surface of the stilling basin slabs; (2) we propose an image algorithm that uses superpixels and the SVM machine learning method to detect coarse aggregate automatically; the aggregate exposure ratio can then be calculated to quantitatively analyse the abrasion damage.

2. System Design

As shown in Figure 1, the UIS-1 system consists of two parts: an underwater inspection robot for image acquisition, and the quantitative analysis. The first part is the customized underwater inspection robot. Its main function is to obtain images of the stilling basin slabs without unwatering. In addition to the functions of general observation-class underwater robots, this robot must be able to remove the silt and sand deposits covering the surface of the stilling basin slabs. However, while the silt is being removed, the water is disturbed and image acquisition is affected. We design an integrated component for removing underwater siltation and observing the surface to overcome this problem.

The second part is the quantitative analysis of the abrasion. Stilling basin slabs are usually composed of a mortar surface layer and a thick concrete layer. Under the action of abrasion, the mortar surface layer wears down gradually, and the coarse aggregate subsequently becomes exposed. Over the course of erosion, the aggregate exposure ratio increases gradually, so we use this ratio as the key indicator for quantitative analysis. Firstly, the exposed coarse aggregate is detected at pixel level from the original images captured by the robot. Then, the aggregate exposure ratio is calculated to quantitatively analyse the abrasion.

3. Underwater Inspection Robot Design

In this research, the underwater robot is designed to operate at depths of up to 100 m. To move with six degrees of freedom, the underwater robot is equipped with eight thrusters. A transponder is attached to the top of the robot; it replies to acoustic signals from the transceiver with its own acoustic pulses, allowing the transceiver to calculate the position of the robot. A pressure vessel provides waterproof protection for the control electronics and sensors. A camera mounted on the front of the pressure vessel observes the underwater environment. A battery module powers the whole system. The siltation-removing module and image acquisition module partially remove the silt and sand deposits and obtain images of the stilling basin slabs. The buoyancy part provides sufficient buoyancy for the underwater robot in water. The mass of the robot is approximately 30 kg, and the net weight-buoyancy force is negative 10 N so that the robot can operate at the bottom of the stilling basin. Umbilical cables carry data between the robot and the operator software above the water surface. The schematic is shown in Figure 2. The modular structural components benefit system expansion and maintenance.

3.1. Actuators

The underwater robot is built on a mechanical structure made of aluminium profiles. As shown in Figure 3, it is equipped with eight thrusters. Four thrusters are placed vertically at each corner of the frame, and the other four are installed at a 45-degree horizontal angle to the axis at the inner corners of the frame. The robot uses the T200 thruster, which weighs 0.35 kg. With a 16 V power input, the maximum forward thrust is 5.1 kgf and the maximum reverse thrust is 4.1 kgf. By controlling the output force of the eight thrusters, the underwater vehicle can move arbitrarily with six degrees of freedom.

3.2. Siltation-Removing and Image Acquisition Modules

The underwater robot is equipped with two siltation-removing modules, symmetrically distributed on both sides of the image acquisition module. Each siltation-removing module is composed of a draft tube and a thruster. When the module works, the thruster pumps clear water from the top of the draft tube to the bottom, forming a water jet that removes the silt and sand deposits.

The underwater robot uses an underwater light and an underwater camera to capture images. The underwater camera has a resolution of 1920 × 1080 pixels and a 2.8 mm focal length. Five LED light sources provide supplementary underwater lighting. The special feature of the image acquisition module is that the underwater light and camera are placed in a transparent container filled with clear water, as shown in Figure 4. The transparent container is cylindrical, with a diameter of 20 cm and a height of 30 cm. The underwater camera and light are mounted in the center of the top of the container.

The siltation-removing module and image acquisition module form the integrated component of the underwater robot for removing underwater siltation and observing the surface. Figure 5 shows the schematic diagrams of the operation. In the first step, the robot places the integrated component on the bottom of the stilling basin; because the slabs are covered by silt and sand deposits, the robot can only capture an image of the siltation, as shown in Figure 5(a). In the second step, the two thrusters of the siltation-removing module produce two water jets, as shown in Figure 5(b). The force of the water jets can be decomposed into vertical and horizontal components. The vertical forces lift the underwater robot to a certain height, and the horizontal forces agitate the deposits and suspend them; at this point, the underwater camera captures only the suspended siltation. In the third step, the thrusters stop working and the underwater robot drops down naturally. The transparent container displaces the suspended siltation, so even though siltation is still suspended around the container, the underwater camera can capture an image of the concrete surface of the stilling basin slabs, as shown in Figure 5(c). If the siltation layer is thick, the underwater robot can repeat the second and third steps until the concrete layer is visible to the camera. If the underwater robot stays in the position of the previous step, as shown in Figure 5(d), it can still capture images of the stilling basin slabs even after the suspended siltation settles back to the bottom.

The underwater robot must be equipped with both the siltation-removing module and the image acquisition module to capture images of the stilling basin slabs. Without the siltation-removing module, the robot would only capture images of the siltation layer, not the concrete surface. Without the transparent container, the underwater robot could only capture images of suspended siltation, as shown in Figures 6(b) and 6(c); and after the siltation-removing module stopped working, the slab concrete would be re-covered by the settling siltation, so the robot would still be unable to capture images of the slabs, as shown in Figure 6(d).

3.3. Control Architecture

The control architecture of UIS-1 is expanded from the ArduSub system, an open-source control solution for remotely operated underwater vehicles. As shown in Figure 7, the control architecture consists of two parts, one above water and one underwater, which communicate with each other over the Ethernet TCP/IP protocol.

The underwater robot's motion control algorithm is implemented in the Pixhawk controller. The Pixhawk controller outputs an eight-channel PWM signal to control the eight ROV thrusters and another two-channel PWM signal to control the thrusters of the siltation-removing module. It reads pressure data from the pressure sensor over the I2C bus. The Raspberry Pi communicates with the camera and the Pixhawk through two USB ports. The Raspberry Pi controller is responsible for exchanging commands between the above-water control computer and the Pixhawk controller, and it also transfers the USB camera video signal to the control computer.

On the above-water side, operation control commands are sent to the control computer via a joystick. Two main software packages run on the control computer: the ground control station software and the image acquisition software. The main function of the ground control station software is to monitor and control the underwater robot in real time. The image acquisition software is used for real-time observation of the siltation-removing state, recording working video, and collecting pictures of the stilling basin slabs.

4. Superpixel-Based Aggregate Detection

The degree of abrasion damage is related to the state of coarse aggregate exposure, so we use the aggregate exposure ratio to quantitatively analyse the abrasion damage. In this article, the underwater image is assumed to be free of distortion. The aggregate exposure ratio $r$ can then be simplified to the percentage of coarse aggregate exposure pixels in the image:

$$r = \frac{A_a}{A_s} = \frac{P_a}{P_s}$$

where $A_a$ is the area of coarse aggregate exposure, $A_s$ is the area of the study region, $P_a$ is the number of coarse aggregate exposure pixels, and $P_s$ is the number of pixels in the study region.
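Computing the ratio reduces to counting pixels in a binary mask; a minimal sketch (the function name and the mask representation are illustrative, not from the paper):

```python
import numpy as np

def aggregate_exposure_ratio(mask):
    """Fraction of pixels flagged as exposed coarse aggregate.

    `mask` is a binary array over the study region: 1 where coarse
    aggregate is exposed, 0 for the mortar background.
    """
    return float(np.count_nonzero(mask)) / mask.size
```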

To automatically detect coarse aggregate exposure in the underwater images, this article proposes a superpixel-based algorithm. Compared with algorithms that process individual pixels directly, the superpixel algorithm is more efficient and accurate and avoids explicit object edge detection.

First, we label all the images as ground truth by distinguishing the coarse aggregate from the background. Then, we randomly divide the data into training images and test images in an 80%/20% split. As shown in Figure 8, we compute the superpixels for each training image. Then, we convert the label image to pixel labels, where background pixels are labelled 0 and aggregate pixels are labelled 1. By comparing the superpixels with the pixel labels, a superpixel is labelled 1 if more than 80% of its pixels are labelled 1, and 0 otherwise. We compute a feature value for each superpixel; the common features include color features and texture features. Then, we train the classifiers with the feature values as input and the labels as output. At the end of this stage, we obtain the trained model.
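The 80%-majority labelling rule can be sketched in a few lines (names are illustrative; `segments` is the integer superpixel map, `pixel_labels` the 0/1 ground-truth mask):

```python
import numpy as np

def label_superpixels(segments, pixel_labels, threshold=0.8):
    """Label a superpixel 1 (aggregate) when more than `threshold` of its
    pixels are labelled 1 in the ground truth, else 0 (background)."""
    labels = {}
    for sp in np.unique(segments):
        frac = pixel_labels[segments == sp].mean()
        labels[int(sp)] = 1 if frac > threshold else 0
    return labels
```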

To verify the effectiveness of the detection algorithm, we evaluate the detection performance on the test images as shown in Figure 9. First, we compute the superpixels of the test images and the feature value for each superpixel. We then predict each superpixel label from its feature value; a superpixel labelled 1 is treated as an aggregate superpixel. The detection result is obtained by merging the aggregate superpixels and the background superpixels, respectively. Finally, we compute the classification result by comparing the detection result with the labelling of the test images.

4.1. Superpixel Algorithm

There are many approaches to generating superpixels, each with its own advantages and drawbacks. Because the SLIC algorithm is faster and more memory efficient than other methods [31], we use SLIC to generate the superpixels of the underwater images.

By default, the only parameter of the SLIC algorithm is $k$, the desired number of approximately equally sized superpixels. First, in the initialization step, the original pixel image is converted to the CIELAB color space. The initial cluster centers $C_i = [l_i, a_i, b_i, x_i, y_i]^T$, where $[l_i, a_i, b_i]$ is the pixel color vector and $[x_i, y_i]$ is the pixel coordinate, are sampled on a regular grid spaced $S = \sqrt{N/k}$ pixels apart, with $N$ the number of image pixels. To avoid centering a superpixel on an edge and to reduce the chance of seeding a superpixel with a noisy pixel, the grid-interval centers are moved to seed locations corresponding to the lowest gradient position in a 3 × 3 neighbourhood.

Next, in the assignment step, the search for similar pixels is done in a region around each superpixel center. Each pixel in the image is associated with the nearest cluster center whose search area covers that pixel. After assigning each pixel to its nearest cluster center, the cluster centers are updated to the mean $[l, a, b, x, y]^T$ vector of all pixels in the cluster. The update process is repeated iteratively until the required number of iterations is reached. The distance $D$ between a pixel and a cluster center is simplified to

$$D = \sqrt{d_c^2 + \left(\frac{d_s}{S}\right)^2 m^2}$$

where $d_c$ is the color distance, $d_s$ is the spatial distance, and $m$ is the importance parameter weighting the two distances.
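The SLIC distance measure, D = sqrt(d_c^2 + (d_s/S)^2 m^2), translates directly into code; this is a sketch with points given as (l, a, b, x, y) vectors:

```python
import numpy as np

def slic_distance(p, c, S, m):
    """SLIC distance between a pixel p and a cluster centre c.

    Both are (l, a, b, x, y) vectors; d_c is the CIELAB colour distance,
    d_s the spatial distance, S the grid interval, and m the weight
    trading colour distance against spatial distance.
    """
    p, c = np.asarray(p, float), np.asarray(c, float)
    d_c = np.linalg.norm(p[:3] - c[:3])  # colour part
    d_s = np.linalg.norm(p[3:] - c[3:])  # spatial part
    return float(np.sqrt(d_c**2 + (d_s / S)**2 * m**2))
```

A larger `m` makes the spatial term dominate, producing more compact, regularly shaped superpixels.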

4.2. Superpixel Classification

The classifier predicts the label of each superpixel based on its color feature and texture feature. In this study, a support vector machine (SVM) is used to learn the classifier from the training data set.

SVM was first proposed by Cortes and Vapnik [32]. Support vector machines are an effective technique for solving classification and regression problems. SVM selects the hyperplane that maximizes its distance to the nearest data points of either class; this is referred to as margin maximization. As such, SVM is particularly effective on data sets that are linearly separable. For many real-life data sets, however, such a hyperplane may not exist. In these cases, SVM uses a function to map the data into a high-dimensional space where such separability becomes possible. This function is called a kernel function. The radial basis function (RBF) kernel is a popular kernel function used in various kernelized learning algorithms. The RBF kernel on two samples $x$ and $x'$, represented as feature vectors in some input space, is defined as

$$K(x, x') = \exp\left(-\gamma \lVert x - x' \rVert^2\right)$$

where $\gamma$ is a free parameter.
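The effect of the RBF kernel can be seen on synthetic data that no single hyperplane separates (a sketch assuming scikit-learn; the data stands in for the superpixel features):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic non-linearly-separable data: class 1 inside the unit circle.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(int)

# An RBF-kernel SVM separates the circular boundary that defeats a
# linear classifier in the original 2-D space.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
```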

4.3. Performance Assessment

For tuning the parameters of the superpixel detection, the accuracy metric is applied, defined as

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

where TP is the number of true positive superpixels, FN the number of false negative superpixels, FP the number of false positive superpixels, and TN the number of true negative superpixels.

To verify the effectiveness of the method on the test data set, we use the absolute difference of the aggregate exposure ratio ($\Delta r$) between the automatic detection and the ground truth:

$$\Delta r = \lvert r_d - r_g \rvert$$

where $r_d$ is the aggregate exposure ratio of the automatic detection and $r_g$ is the aggregate exposure ratio of the ground truth.
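Both assessment measures, accuracy = (TP + TN)/(TP + TN + FP + FN) and the absolute ratio difference, are one-liners (illustrative names):

```python
def accuracy(tp, tn, fp, fn):
    """Superpixel classification accuracy: correct / all."""
    return (tp + tn) / (tp + tn + fp + fn)

def ratio_abs_diff(r_detected, r_ground_truth):
    """Absolute difference of aggregate exposure ratios (Delta r)."""
    return abs(r_detected - r_ground_truth)
```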

5. Experiment in the Field

The performance of the UIS-1 system was evaluated in field experiments at a working dam on 23 January 2019. This section discusses the results of its application in an actual environment.

5.1. Dam of the Experiment

The dam used in the experiment is located on the mainstream of the Jialing River in Sichuan Province, China, as shown in Figure 10. This hydropower project was put into operation in May 2014. The dam has two stilling basins. As shown in Figure 11, the experiment was carried out in one of them, which is 135 m long and 143 m wide. The water depth is 18 m in the dry season.

5.2. Underwater Image Acquisition

The prototype of the underwater inspection robot used in the experiment is shown in Figure 12. Because of the large size of the stilling basin, we used a sampling method to obtain the underwater images. The stilling basin was meshed into 64 sampling points. All the sampling points are located at the center of the spillway central axis and separated longitudinally by 20 meters, as shown in Figure 13. The sampling points are named $P_{i,j}$, where $i$ is the horizontal serial number and $j$ is the vertical serial number.

The underwater image acquisition began with placing the acoustic positioning transceiver on the surface of the stilling basin. Then, the operator boat carrying the underwater robot entered the stilling basin area. After this preparatory work, the underwater robot was put into the water. Operators conducted the underwater acquisition through three software tools: the acoustic positioning software, the ground control station software, and the image acquisition software, as shown in Figure 14. The operator used the acoustic positioning software to locate the underwater robot and remotely drive it to the test point. The orientation, water depth, and environment image were obtained through the ground control station software. When the robot reached a sample point, the image acquisition software was used to obtain the image of the stilling basin.

At each sample point, the operator first observed the siltation on the surface of the stilling basin, as shown in Figure 15(a). Then, the operator switched on the two thrusters of the siltation-removing module, and the siltation was gradually washed away, as shown in Figure 15(b). When the siltation was completely removed, a clear image of the surface could be obtained, as shown in Figure 15(c).

Through the above workflow, the operator completed the collection at all 64 sample points in turn. Two images were acquired at each sample point, giving a total of 128 images of the stilling basin slabs.

5.3. Aggregate Detection

The image acquisition software saves images in 1920 × 1080 pixel format. To preserve only the effective concrete surface, each image was clipped to 1000 × 820 pixels. The 128 images were separated into 103 training images and 25 test images, a ratio close to 4 : 1. The exposed coarse aggregate was manually labelled using the LabelMe tool, and the manual labelling results are used as the ground truth.

To determine the size parameter, different numbers of superpixels were generated for typical images, with $k$ set to 300, 500, and 800, as shown in Figure 16. Under these parameters, the average pixel area of a superpixel is 2733, 1640, and 1025, respectively. Considering the computational complexity and average pixel area, we chose $k = 500$ superpixels per image. The parameter $m$ of the SLIC algorithm was set to 10, giving more weight to the spatial distance.

There are 6437 superpixels labelled "1" and 45047 superpixels labelled "0" in the training data set, and 1635 superpixels labelled "1" and 10832 superpixels labelled "0" in the test data set. The training was performed on a computer with an Intel Xeon E5-2650 @ 2.2 GHz CPU and 62 GB of memory, running Ubuntu 16.04 and Python 3.6. In the SVM training, the RBF kernel parameter $\gamma$ and a capacity factor $C$ need to be tuned. The parameter configurations are exhaustively searched over candidate values of $\gamma$ and $C$, and the configuration leading to the smallest 10-fold cross-validation error is selected.
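The tuning step can be sketched with scikit-learn's grid search; the candidate grids below are illustrative, since the paper's exact search ranges are not listed, and the data is a synthetic stand-in:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the superpixel feature/label data.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hypothetical candidate values for C and gamma.
grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}

# 10-fold cross-validation; GridSearchCV keeps the configuration with
# the highest CV accuracy, i.e. the smallest CV error.
search = GridSearchCV(SVC(kernel="rbf"), grid, cv=10).fit(X, y)
```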

We evaluated different types of color and texture features in training to obtain the best accuracy. For the color feature, we selected RGB (red, green, blue) and HSV (hue, saturation, value). For the texture feature, we selected HOG (histogram of oriented gradients) [33], LBP (local binary patterns) [34], and GLCM (grey-level co-occurrence matrix) [35]. Comparing the training results in Table 1, the HSV color feature and LBP texture feature achieve better accuracy than the others, so we combined them for superpixel classification. The final classification accuracy is 91.22%.
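One plausible way to assemble the combined per-superpixel feature vector, assuming scikit-image (the function and its parameter defaults are illustrative, not the paper's exact implementation):

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import local_binary_pattern

def superpixel_feature(image_rgb, segments, sp, n_points=8, radius=1):
    """Mean HSV colour concatenated with a uniform-LBP histogram
    for the superpixel with id `sp`."""
    mask = segments == sp
    hsv = rgb2hsv(image_rgb)
    color = hsv[mask].mean(axis=0)                      # 3 HSV values
    gray = image_rgb.mean(axis=2)
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    # "uniform" LBP with P points yields codes 0 .. P+1, hence P+2 bins.
    hist, _ = np.histogram(lbp[mask], bins=n_points + 2,
                           range=(0, n_points + 2), density=True)
    return np.concatenate([color, hist])                # 3 + 10 values
```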

Feature | Accuracy (%)
RGB color feature | 84.62
HSV color feature | 85.47
HOG texture feature | 77.90
LBP texture feature | 80.93
GLCM texture feature | 75.71
HSV color + LBP texture feature | 91.22

5.4. Aggregate Exposure Ratio

According to the superpixel labels predicted by the trained model, the detected aggregate exposure ratio can be calculated. The absolute difference of the ratio between the prediction and the ground truth for the test images is shown in Figure 17. The calculation results show that the absolute difference of the ratio is below 7% in all cases.

5.5. Comparative Study

The underwater robots reviewed in the introduction have no function for removing silt and sand deposits, so they cannot obtain clear images of the concrete surface of the stilling basin slabs. Therefore, the underwater robot proposed in this paper is better suited to the inspection task of the stilling basin slabs.

To analyse the advantage of the proposed detection method, we compare it with the linear spectral clustering superpixel and the naive Bayes machine learning method. Linear spectral clustering (LSC) is another superpixel segmentation method, first proposed by Li and Chen [36]. In LSC, each pixel in the CIELAB color space is transformed into a ten-dimensional vector in the feature space. k seeds are sampled uniformly over the image at fixed horizontal and vertical intervals. After moving each seed to its lowest-gradient neighbour in a 3 × 3 neighbourhood, the seeds are used as search centers and their feature vectors as initial weighted means of the corresponding clusters. Then, the pixel assignment and cluster updating processes are repeated until convergence, and pixels in the same cluster form a superpixel. Naive Bayes (NB) is one of the most efficient and effective inductive learning algorithms for machine learning [37]. The NB algorithm is a supervised classification method based on the Bayesian theorem with the assumption of conditional independence between features.

We compare the detection algorithm proposed in this paper with three alternatives: (1) SLIC superpixels with the NB machine learning method; (2) LSC superpixels with the SVM machine learning method; (3) LSC superpixels with the NB machine learning method. The three comparison algorithms use the same dataset and the same features (HSV color feature and LBP texture feature) to train their models. The number of superpixels is set to 500. The best parameters of the SVM algorithm are chosen by tuning, and the likelihood of the features is assumed to be Gaussian in the naive Bayes algorithm.
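The NB baseline with the Gaussian likelihood assumption is straightforward in scikit-learn (a sketch on synthetic stand-in data, not the paper's exact setup):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the superpixel features and labels.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
X[:100] += 2.0                      # shift class-1 samples apart
y = np.array([1] * 100 + [0] * 100)

# GaussianNB models each feature with a per-class Gaussian likelihood
# and assumes conditional independence between features.
nb = GaussianNB().fit(X, y)
```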

Through training, we obtained models for the different algorithms. The accuracy of the different models is shown in Table 2; the accuracy of the method proposed in this paper is higher than that of the other three algorithms. We used the trained models to calculate the aggregate exposure ratio on the test data set, which was never used in the training process. The results for the 25 test images are shown in Table 3. The minimum, maximum, and average values of the absolute difference between the predicted ratio and the ground truth ratio ($\Delta r$) in the test images are summarized in Table 2.

Performance | Proposed method (%) | SLIC and NB (%) | LSC and SVM (%) | LSC and NB (%)
Minimum | 0.01 | 1.41 | 0.12 | 0.82
Maximum | 6.76 | 16.59 | 8.59 | 12.81
Average | 3.34 | 5.86 | 3.90 | 5.47

[Table 3: absolute difference $\Delta r$ (%) of the aggregate exposure ratio for each of the 25 test images, under the proposed method, SLIC and NB, LSC and SVM, and LSC and NB.]
Five typical examples of the predicted results are shown in Figure 18. Figures 18(a)–18(e) correspond to No. 3, No. 7, No. 10, No. 17, and No. 24 in Table 3. We can see that the worse the abrasion erosion, the larger the aggregate exposure ratio, so the predicted aggregate exposure ratio can be used to quantitatively analyse the abrasion damage. From the performance comparison in Table 2 and the typical predicted images in Figure 18, the algorithm proposed in this paper is more effective for coarse aggregate detection on stilling basin slabs.

6. Conclusions

This paper proposes a novel underwater inspection system named UIS-1 for the abrasion of dam stilling basin slabs. With the integrated component for removing siltation and observation, the underwater inspection system can remove underwater siltation and observe the surface of the stilling basin slabs. An image-processing method based on SLIC superpixels and SVM is designed to automatically separate coarse aggregates from the background in the underwater images. The ratio of coarse aggregate exposure pixels in each image is calculated to represent the degree of aggregate exposure, and this aggregate exposure ratio is then used to quantitatively analyse the abrasion damage. The proposed system was evaluated in field experiments at a dam in Sichuan, China, where the underwater robot obtained 128 images from 64 sampling points. Different color and texture features were compared to obtain the best accuracy. Through the analysis and comparison of the results, both the accuracy and the absolute difference of the ratio are superior to those of the other three algorithms.

Although the UIS-1 system has been used in an actual dam stilling basin, several aspects can still be improved in the future: (1) The first is the degree of automation. At present, the system mainly relies on manual operation to control the robot and acquire images, so the proficiency of the operator affects its efficiency. In the future, the operation of the underwater robot can be further automated, reducing manual work and improving efficiency. (2) Second, as efficiency increases, the number of images acquired in the same operating time will grow, so a stronger algorithm, such as deep learning, can be used to increase pixel-level accuracy. (3) Finally, the underwater robot can integrate more types of sensors to obtain more multidimensional data for a better quantitative analysis of abrasion.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Sichuan Science and Technology Program (2018JZ0001, 2018GZDZX0043, 2019YFG0143, 2019YFG0144, and 2019YFS0057) and the Science and Technology Project of China Datang Corporation (CDT-TZK/SYD[2018]-010).

References

  1. Y.-W. Liu, T. Yen, and T.-H. Hsu, “Abrasion erosion of concrete by water-borne sand,” Cement and Concrete Research, vol. 36, no. 10, pp. 1814–1820, 2006.
  2. P. Shi, X. Fan, J. Ni, Z. Khan, and M. Li, “A novel underwater dam crack detection and classification approach based on sonar images,” PLoS One, vol. 12, no. 6, p. e0179627, 2017.
  3. W. Kazmi, P. Ridao, D. Ribas, and E. Hernández, “Dam wall detection and tracking using a mechanically scanned imaging sonar,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3595–3600, Kobe, Japan, May 2009.
  4. X. Xu, J. Wu, J. Shen, and Z. He, “Case study: application of GPR to detection of hidden dangers to underwater hydraulic structures,” Journal of Hydraulic Engineering, vol. 132, no. 1, pp. 12–20, 2005.
  5. P. Ridao, M. Carreras, D. Ribas, and R. Garcia, “Visual inspection of hydroelectric dams using an autonomous underwater vehicle,” Journal of Field Robotics, vol. 27, no. 6, pp. 759–778, 2010.
  6. E. CavalcantiNeto, R. M. Cavalcante, T. Varela et al., “Autonomous underwater vehicle to inspect hydroelectric dams,” International Journal of Computer Applications, vol. 101, no. 11, pp. 1–11, 2014.
  7. Y. Yang, S. Hirose, P. Debenest, M. Guarnieri, N. Izumi, and K. Suzumori, “Development of a stable localized visual inspection system for underwater structures,” Advanced Robotics, vol. 30, no. 21, pp. 1415–1429, 2016.
  8. S. Shimono, “Evaluation of under water positioning by hanged ROV from USV,” International Journal of Modeling and Optimization, vol. 7, no. 4, pp. 224–230, 2017.
  9. S. Shimono, S. Toyama, and U. Nishizawa, “Development of underwater inspection system for dam inspection: results of field tests,” in Proceedings of the OCEANS 2016 MTS/IEEE Monterey, pp. 1–4, Monterey, CA, USA, September 2016.
  10. N. A. Cruz, A. C. Matos, R. M. Almeida, B. M. Ferreira, and N. Abreu, “TriMARES-a hybrid AUV/ROV for dam inspection,” in Proceedings of the OCEANS’11 MTS/IEEE KONA, pp. 1–7, Waikoloa, HI, USA, September 2011.
  11. H. Sugimoto, Y. Moriya, and T. Ogasawara, “Underwater survey system of dam embankment by remotely operated vehicle,” in Proceedings of the 2017 IEEE Underwater Technology (UT), pp. 1–6, Busan, South Korea, February 2017.
  12. L. Provencher and S. Sarraillon, “The MASKI+ underwater inspection robot: a new generation ahead,” in Proceedings of the 2016 4th International Conference on Applied Robotics for the Power Industry (CARPI), pp. 1–6, Jinan, China, October 2016.
  13. C. Yu, X. Xiang, J. Zhang, R. Zhao, and C. Zhou, “Complete coverage tracking and inspection for sloping dam wall by remotely operated vehicles,” in Proceedings of the OCEANS 2017, pp. 1–5, Anchorage, AK, USA, September 2017.
  14. X. Liu, R. Chen, Z. Xue, Y. Lei, and J. Tian, “Design and optimization of a novel swirling sucker for underwater wall-climbing robots,” in Proceedings of the 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), pp. 1000–1005, Munich, Germany, August 2018.
  15. P. Kohut, M. Giergiel, P. Cieslak, M. Ciszewski, and T. Buratowski, “Underwater robotic system for reservoir maintenance,” Journal of Vibroengineering, vol. 18, no. 6, pp. 3757–3767, 2016.
  16. S. Li, X. Zhao, and G. Zhou, “Automatic pixel-level multiple damage detection of concrete structure using fully convolutional network,” Computer-Aided Civil and Infrastructure Engineering, vol. 34, no. 7, pp. 616–634, 2019.
  17. I. Abdel-Qader, O. Abudayyeh, and M. E. Kelly, “Analysis of edge-detection techniques for crack identification in bridges,” Journal of Computing in Civil Engineering, vol. 17, no. 4, pp. 255–263, 2003.
  18. T. H. Dinh, Q. P. Ha, and H. M. La, “Computer vision-based method for concrete crack detection,” in Proceedings of the 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), vol. 2016, pp. 1–6, Phuket, Thailand, November 2016.
  19. Y. Noh, D. Koo, Y.-M. Kang, D. Park, and D. Lee, “Automatic crack detection on concrete images using segmentation via fuzzy C-means clustering,” in Proceedings of the 2017 International Conference on Applied System Innovation (ICASI), pp. 877–880, Sapporo, Japan, May 2017.
  20. Y. Sato, Y. Bao, and Y. Koya, “Crack detection on concrete surfaces using V-shaped features,” The World of Computer Science and Information Technology Journal, vol. 8, no. 1, pp. 1–6, 2018.
  21. P. Shi, X. Fan, J. Ni, and G. Wang, “A detection and classification approach for underwater dam cracks,” Structural Health Monitoring: An International Journal, vol. 15, no. 5, pp. 541–554, 2016.
  22. X. Fan, J. Wu, P. Shi, X. Zhang, and Y. Xie, “A novel automatic dam crack detection algorithm based on local-global clustering,” Multimedia Tools and Applications, vol. 77, no. 20, pp. 26581–26599, 2018.
  23. L. Zhang, F. Yang, Y. Daniel Zhang, and Y. J. Zhu, “Road crack detection using deep convolutional neural network,” in Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), pp. 3708–3712, Taipei, Taiwan, September 2016.
  24. Y.-J. Cha, W. Choi, and O. Büyüköztürk, “Deep learning-based crack damage detection using convolutional neural networks,” Computer-Aided Civil and Infrastructure Engineering, vol. 32, no. 5, pp. 361–378, 2017.
  25. Y.-J. Cha, W. Choi, G. Suh, S. Mahmoudkhani, and O. Büyüköztürk, “Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types,” Computer-Aided Civil and Infrastructure Engineering, vol. 33, no. 9, pp. 731–747, 2018.
  26. F. T. Ni, J. Zhang, and Z. Q. Chen, “Pixel-level crack delineation in images with convolutional feature fusion,” Structural Control and Health Monitoring, vol. 26, no. 1, pp. 1–18, 2019.
  27. X. Yang, H. Li, Y. Yu, X. Luo, T. Huang, and X. Yang, “Automatic pixel-level crack detection and measurement using fully convolutional network,” Computer-Aided Civil and Infrastructure Engineering, vol. 33, no. 12, pp. 1090–1109, 2018.
  28. C. V. Dung and L. D. Anh, “Autonomous concrete crack detection using deep fully convolutional neural network,” Automation in Construction, vol. 99, pp. 52–58, 2019.
  29. A. Kryžanowski, M. Mikoš, J. Šušteršiè, V. Ukrainczyk, and I. Planinc, “Testing of concrete abrasion resistance in hydraulic structures on the Lower Sava River,” Strojniski Vestnik/Journal of Mechanical Engineering, vol. 58, no. 4, pp. 245–254, 2012.
  30. S. Choi and J. E. Bolander, “A topology measurement method examining hydraulic abrasion of high workability concrete,” KSCE Journal of Civil Engineering, vol. 16, no. 5, pp. 771–778, 2012.
  31. R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “SLIC superpixels compared to state-of-the-art superpixel methods,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274–2282, 2012.
  32. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  33. N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 1, no. 16, pp. 886–893, San Diego, CA, USA, June 2005.
  34. T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
  35. R. M. Haralick, K. Shanmugam, and I. H. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610–621, 1973.
  36. Z. Li and J. Chen, “Superpixel segmentation using linear spectral clustering,” in Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 4, pp. 1356–1363, Boston, MA, USA, June 2015.
  37. C. K. I. Williams and D. Barber, “Bayesian classification with Gaussian processes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, pp. 1342–1351, 1998.

Copyright © 2019 Yonglong Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
