International Journal of Aerospace Engineering
Volume 2019, Article ID 5137139, 14 pages
https://doi.org/10.1155/2019/5137139
Research Article

Visual Inspection of the Aircraft Surface Using a Teleoperated Reconfigurable Climbing Robot and Enhanced Deep Learning Technique

1Singapore University of Technology and Design, Singapore 487372
2Department of Engineering and Technology, Universidad de Occidente, Campus Los Mochis, 81223, Mexico
3Department of Computer Science, Birla Institute of Technology and Science (BITS) Pilani, Pilani Campus, 333031, Vidyavihar, Rajasthan, India
4Department of Electrical Engineering, UET Lahore, NWL Campus 54890, Pakistan
5ST Engineering Aerospace, ST Engineering, Singapore 539938

Correspondence should be addressed to Mohan Rajesh Elara; rajeshelara@sutd.edu.sg

Received 30 January 2019; Revised 30 May 2019; Accepted 23 July 2019; Published 12 September 2019

Academic Editor: Antonio Concilio

Copyright © 2019 Balakrishnan Ramalingam et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Aircraft surface inspection includes detecting surface defects caused by corrosion and cracks, as well as stains from oil spills, grease, dirt sediments, etc. In the conventional aircraft surface inspection process, human visual inspection is performed, which is time-consuming and inefficient, whereas robots with onboard vision systems can inspect the aircraft skin safely, quickly, and accurately. This work proposes an aircraft surface defect and stain detection model using a reconfigurable climbing robot and an enhanced deep learning algorithm. A reconfigurable, teleoperated robot, named "Kiropter," is designed to capture aircraft surface images with an onboard RGB camera. An enhanced SSD MobileNet framework is proposed for stain and defect detection from these images. A self-filtering-based periodic pattern detection filter is included in the SSD MobileNet deep learning framework to improve the detection of stains and defects in aircraft skin images. The model has been tested with real aircraft surface images acquired from a Boeing 737 and a compact aircraft using the teleoperated robot. The experimental results show that the enhanced SSD MobileNet framework achieves improved detection accuracy of aircraft surface defects and stains compared to conventional models.

1. Introduction

Aircraft skin inspection is essential under the Corrosion Prevention and Control Program (CPCP) to ensure the aircraft's structural integrity [1]. Under CPCP, the aircraft should be kept thoroughly clean of deposits containing contaminating substances such as oil, grease, dirt, and other organic or foreign materials to protect the aircraft from the risk of corrosion and the degradation of seals and plastic components. Furthermore, after scheduled cleaning, the aircraft should be thoroughly inspected to identify uncleaned areas (typically stains) and surface defects. During cleaning, the cleaning agent may build up on these defects, which can worsen the damage [1, 2].

Human visual inspection is, by far, the most widely used method in aircraft surface inspection [3, 4] as per CPCP. However, the current practice of climbing onto the aircraft body to carry out surface inspection raises safety issues for the inspectors. It is also time-consuming and at times ineffective due to inspector fatigue or boredom. Automated aircraft skin inspection systems based on computer vision techniques could allow the inspector to perform the necessary visual inspection safely, quickly, and accurately [3, 5, 6]. Robotic assistance for inspection of the aircraft skin has been investigated in [4, 7–9]. Such systems need a flexible teleoperated robotic platform with different locomotion capabilities to access the aircraft surface, together with an optimal detection algorithm for automatically detecting stains and defects on the aircraft skin.

Designing a robotic inspection platform with good adherence, mobility, and flexibility is a key challenge. Typically, fixed-morphology climbing robots are used for aircraft inspection. They use magnetic devices, vacuum suction cups, or propeller force to adhere to and climb the aircraft surface [8, 10–12]. However, these climbing robots face difficulties when accessing confined areas due to their less flexible design [8, 9, 13]. They also find it hard to climb overlapped joints and the fuselage, which reduces their coverage [9]. Reconfigurable robotic platforms can access confined spaces, rough terrain, and hard-to-navigate surfaces by dynamically changing their shape and functionality. Recently, reconfigurable robotic platforms have been widely developed and deployed in various applications, including inspection [14, 15], repair [16], cleaning [17, 18], and space applications [19]. Kwon et al. [20] developed a reconfigurable robot for in-house and sewage pipeline inspection. In [14], the authors developed a reconfigurable robot for bridge inspection. Tromino- and tetromino-tiling-based reconfigurable shape-changing robots have also been developed for floor cleaning [21, 22]; these floor cleaning robots achieve better floor area coverage than fixed-morphology robots.

Another challenge for aircraft visual inspection is developing a detection algorithm to recognize stains and defects automatically. In the last decade, various visual inspection algorithms have been applied to the field of aircraft inspection and surface defect detection. These visual inspection algorithms fall into two types: traditional image processing-based visual inspection [5, 23–25] and machine learning techniques [26–31]. Typically, the traditional algorithms use rudimentary characteristics like edges, brightness, histograms, and spectral features to detect and segment defects [28]. However, these image processing methods work well only in controlled environments and often fail in complex real-world scenarios due to noise and complex backgrounds. Moreover, for different defects, the thresholds used in these algorithms often need to be adjusted, or it may even be necessary to redesign the algorithms [28]. CNN-based algorithms have been successfully implemented in defect detection and inspection applications, including surface crack and defect detection [26–28, 30, 32], solar panel inspection [33], and cleaning inspection [34]. Cha and Choi proposed the use of CNNs [32] and the Faster RCNN method [29] for better crack detection on concrete and metal surfaces. The authors suggest the use of Faster RCNN, which produces a more optimized bounding box for crack localization. Further, the authors also propose a UAV system for tagging and localizing concrete cracks [35, 36].

Typically, the key challenge of deep learning algorithms for this application is the requirement of a large amount of image data and an optimal preprocessing algorithm. Preprocessing plays a vital role in helping the network recognize low-contrast objects and differentiate between objects with similar features, such as dirt, stains, and scratches on the aircraft surface, all at a small cost which is negligible compared to increasing the complexity of the CNN architecture [26]. Li et al. [26] included a preprocessing algorithm with SSD MobileNet in a surface defect detection application, using Gaussian filtering, edge detection, and circular Hough transform algorithms to enhance edge quality and filter noise out of the image. In [37], edge information was used to improve Faster RCNN detection accuracy in a traffic sign detection application; the authors showed that the edge information improved the detection accuracy and recall rate of Faster RCNN. Brendel and Bethge developed a CNN model named BagNets, which uses local image features to achieve a better detection ratio and good feature sensitivity [38]. In [28], Tao et al. show that their compact CNN achieves better detection when trained on texture images than on content images.

In order to overcome the shortcomings mentioned earlier, this paper proposes a reconfigurable suction-based robot named "Kiropter" for aircraft inspection, along with an enhanced SSD MobileNet deep learning framework for recognizing and classifying stains and defects on the aircraft surface. The reconfigurable robot is capable of accessing confined areas, overlapped joints, and the fuselage of the aircraft body by dynamically changing its shape and functionality. A self-filtering-based periodic pattern detection filter is adopted within the SSD MobileNet deep learning framework to effectively enhance the recognition results for low-contrast stain and defect areas in the aircraft skin images. This article is organized as follows: related work is reported in Section 2. Section 3 describes the robotic architecture and functionality. The enhanced deep learning-based inspection model is described in Section 4. The experimental results are given in Section 5. Finally, conclusions and future work are provided in Section 6.

2. Related Research Work

Very few works on aircraft skin inspection exist in the literature. Some of these works focus on developing a robotic platform for inspection, while others focus on the detection algorithm. Siegel and Gunatilake [39] developed an aircraft surface inspection model based on the crown inspection mobile platform (CIMP), which captures images of the aircraft body and employs a computer-aided visual inspection algorithm for recognizing defects on the surface. The visual algorithm comprises a wavelet-based image enhancement scheme to highlight the cracks and a three-layered feedforward neural network consisting of ten inputs, thirty hidden units, and two outputs to classify the cracks and corrosion. Enhanced remote visual inspection of the aircraft skin based on the CIMP robot system using an improved NN algorithm was proposed by Alberts et al. [40]. Here, the authors design a three-layer network (input, hidden, and output layers) with 169 input nodes for differentiating healthy and cracked areas of the aircraft surface. Automated aircraft visual inspection is reported by Rice et al. [24], where the authors use a roof-mounted depth camera to scan the aircraft surface, followed by a contour fitting algorithm to visualize the defects present on the aircraft. Mumtaz et al. [23] examined three image processing algorithms to differentiate between cracks and scratches on the aircraft skin. The authors tested a neural network, the contourlet transform (CT) with a dot product classifier (DPC), the discrete cosine transform (DCT) with a DPC, and the combination of DCT and CT with a DPC. Among these, the combined DCT and CT with DPC scheme achieves the highest recognition rate. Jovančević et al. [5] developed an automated aircraft exterior inspection model for an autonomous mobile collaborative robot (cobot) using 2D image processing methods. The authors use a kernel mask to remove noise and enhance the texture of aircraft surface regions. The Hough transform (HT) and edge detection circle (EDC) algorithms are used to extract the geometrical shapes of objects on the aircraft surface (oxygen bay, radome latch, air inlet vent, engine, and pitot probe), and a CAD model is used to inspect state changes. Shang et al. [41] developed a climbing robot for aircraft wing and fuselage inspection. The model was designed to perform various nondestructive tests, including eddy current inspection for surface cracks and subsurface corrosion and thermographic inspection to detect loose rivets. In [9], a snake-arm robot system was developed for aircraft Remote Access Nondestructive Evaluation (RANDE); the robot can reach into tight spaces in aircraft wings and perform crack and defect inspection. Aircraft inspection with a reconfigurable robot and an enhanced deep learning scheme is a novel approach with considerable research potential.

3. Proposed Method

The functional block diagram of the proposed visual inspection model is shown in Figure 1. It comprises an inspection robot (Kiropter) and an enhanced inspection algorithm. This section first describes the structure and hardware of the robot, followed by the algorithm used for stain and defect detection.

Figure 1: Proposed scheme.
3.1. General Overview

The Kiropter is a semiautonomous, teleoperated, differential-drive 8W 2D robotic platform, as shown in Figure 1. The mobile robot is capable of navigating around the surface of the aircraft by utilizing its reconfigurable nature. The structure is built from polylactic acid (PLA), the base from 5 mm acrylic, and the electric ducted fan (EDF) holders from thermoplastic polyurethane (TPU). The platform is 450 mm long and 200 mm wide and weighs approximately 2.21 kg in total. The wheels are made of soft, high-friction rubber to increase traction on the airplane surface during locomotion. The robot is powered by 11.1 V, 4200 mAh, 45C LiPo batteries connected in parallel.

3.2. Hardware Architecture

Figure 2 shows the hardware architecture of the Kiropter robot. It comprises four functional units: the central control unit (CCU), the locomotive system, the EDF system, and the vision system. The robot communicates wirelessly and can be controlled through a graphical user interface (GUI). The function of each unit is described as follows.

Figure 2: Hardware components and communication networks.
3.3. Central Control Unit

The central control unit (CCU) is powered by an Arduino Mega 2560 microcontroller. It handles the wireless communication interface and generates the required control signals for the locomotion unit and the shape-changing unit according to the control flow chart shown in Figure 3.

Figure 3: Control of the Kiropter flow chart, for three simultaneous commands.
3.4. Locomotive System

Locomotion of the Kiropter robot is achieved through three functions: adhesion, rolling, and transformation. Electric turbines are used for adhesion, and servomotors are used for navigation around the surface of the aircraft and for transformation. The electric turbines are controlled by the CCU through the HV75A brushless motor controller; the CCU generates the required PWM signal for the driving unit (HV75A) to adjust the turbine speed. For transformation, a servomotor with a torque of 1.5 Nm is placed in the central articulation of the robot and is controlled asynchronously by the CCU. Through this servomotor, the robot can reconfigure its shape to different angles: 0°, 90° (on orthogonal surfaces), 45°, −5°, and −7° (Figure 4). This enables it to traverse difficult surfaces, specifically the transition between the aircraft wing and the body across the curvature of the aircraft surface. In this way, the robot keeps its wheels in contact with the surface, which is critical for the stability of the platform. The position of the robot can be estimated from the feedback of the angular position of the wheel servomotors and the angular position of the central joint. In addition, a wheel encoder and two inertial measurement unit (IMU) sensors are fixed at the front and rear ends of the robot to estimate the robot's position and orientation on the airplane structure. From the wheel encoder and IMU sensor data, stain position and orientation information is estimated during the inspection stage.

Figure 4: Configurations of the Kiropter for different situations: (a) flat, (b) 90° for the transition wing/body, (c) 45° for transition and navigation, (d) 6° for navigation on the 767 and 787 aircrafts, and (e) 9° for navigation on the 737 aircraft.
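The pose estimation from wheel encoders described above can be sketched with standard differential-drive dead reckoning. The function below is an illustrative sketch, not the authors' implementation; the variable names, the wheel_base parameter, and the midpoint-heading approximation are our assumptions.

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning update for a differential-drive platform.

    x, y, theta      -- current pose estimate (theta in radians)
    d_left, d_right  -- wheel travel since the last update, from the encoders
    wheel_base       -- lateral distance between wheel tracks (assumed name)
    """
    d = (d_left + d_right) / 2.0               # distance moved by the robot centre
    dtheta = (d_right - d_left) / wheel_base   # change in heading
    # midpoint-heading approximation for the translation direction
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

In practice, the IMU readings mentioned in the text would be fused with this estimate (for example, with a complementary or Kalman filter) to correct heading drift on the curved fuselage.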
3.5. Electric Ducted Fan (EDF) System

An EDF is an impeller driven by a brushless motor mounted inside a circular duct; each EDF provides a holding thrust of 3.5 kg and receives power directly from the batteries. The speed of each EDF, and hence the energy used, is controlled independently by varying the pulses from the CCU through an electronic speed controller (ESC) (Figure 5). Through the EDF system, the robot is able to hold onto and inspect the curved and lower surfaces of the airplane.

Figure 5: EDF connections to the energy and control unit.
3.6. Vision System

The vision system consists of a WiFi-enabled HD 1080p camera (HDDB 10AD). The camera is placed 72 mm above the surface of the plane, at the center of the robot body, and is inclined at an angle of 30° from the robot's horizontal plane (Figures 6 and 7). At this height of 72 mm, the camera is estimated to have a viewing range of ≈460 mm on flat surfaces. The opening angle of the camera is 60°, so the field of view starts 23 mm from the robot in the horizontal plane.

Figure 6: Position of the camera in the Kiropter: front and side views.
Figure 7: Angle of the view of the camera with respect to the inertial system of the robot.

4. Enhanced Deep Learning Framework

This section describes the vision-based aircraft surface stain and defect detection based on the enhanced deep learning technique, as shown in Figure 8. The framework has two phases: preprocessing and detection.

Figure 8: Enhanced deep learning scheme.
4.1. Image Preprocessing

Generally, the backgrounds present in training and test image data can affect the learning and recognition abilities of all detection algorithms [26, 28, 38]. Through preprocessing, the effect of these backgrounds can be reduced, which enhances recognition accuracy at a relatively small cost. Preprocessing can also enhance weak edges and features which the model may otherwise find difficult to learn. In view of this, a self-filtering-based periodic pattern detection filter and the Sobel edge detection technique are adopted. The periodic pattern detection filter uses the self-filtering technique to suppress unwanted background patterns present in the images. It is based on the property that a periodic pattern in the spatial domain of the captured image produces distinct peaks in the frequency domain. The self-filtering technique automatically adapts the filter function to the background of the image, computing the appropriate filter function from the magnitude of the Fourier-transformed image. After this, a Sobel edge detector is used to enhance the weak edges present in the image. The preprocessing stage is described in Algorithm 1.

Algorithm 1. Preprocessing algorithm.
Data: grayscale image I(x, y) (pixel coordinates x, y)
Result: preprocessed image
Step 1: transform the image to the frequency domain (coordinates u, v) using the 2D FFT, then apply the FFT shift.
Step 2: apply the log-absolute function to the Fourier-transformed source image to generate the amplitude image.
Step 3: compute the self-filtering function from the amplitude image.
Step 4: suppress the periodic patterns in the frequency image using the self-filtering function.
Step 5: transform the filtered image back to image space using the inverse Fourier transform.
Step 6: perform edge enhancement. Due to the strong filtering effect of the prior stage, the edges of the stains and defects are slightly blurred, which may affect the detection accuracy of the algorithm. Hence, the Sobel filter is applied to enhance the edges of the defect and stain regions in the periodic-pattern-suppressed image.
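The steps above can be sketched in NumPy. This is a minimal illustration of the idea, not the authors' exact filter: here the self-filtering function is approximated by zeroing spectral bins whose log-amplitude exceeds a data-driven percentile threshold (while preserving the low-frequency region), and the Sobel gradient magnitude is added back to strengthen edges. The percentile and weighting parameters are our assumptions.

```python
import numpy as np

def self_filter(img, keep_dc_radius=3, peak_percentile=99.5):
    """Suppress periodic background patterns via spectral peak notching."""
    F = np.fft.fftshift(np.fft.fft2(img))          # Step 1: 2D FFT + shift
    amp = np.log1p(np.abs(F))                      # Step 2: log-amplitude image
    thresh = np.percentile(amp, peak_percentile)   # Step 3: data-driven filter
    mask = amp < thresh                            # Step 4: drop dominant peaks
    # Keep the low-frequency (DC) region so overall brightness survives.
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= keep_dc_radius ** 2
    # Step 5: inverse transform back to image space.
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def sobel_enhance(img, weight=0.5):
    """Step 6: add the Sobel gradient magnitude back to strengthen edges."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(kx.T[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return img + weight * np.hypot(gx, gy)
```

On an image dominated by a periodic background (for example, a riveted panel texture), the strong off-center spectral peaks are removed, leaving low-frequency content and aperiodic details such as stains.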

4.2. SSD MobileNet Detection Network

SSD MobileNet is an object detection framework which is trained to detect and classify defects and stains in the captured images. Here, MobileNet v2 is the base network used to extract high-level features from the images for classification and detection, and SSD is a detection model which uses the MobileNet feature map outputs and convolution layers of different sizes to classify objects and regress bounding boxes. The connection of MobileNet and SSD is shown in Figure 9.

Figure 9: SSD MobileNet.
4.2.1. MobileNet V2 Architecture

MobileNet v2 [42] is a lightweight feature extractor that is widely used for real-time detection applications and low-power embedded systems. Figure 10 shows the functional block diagram of the MobileNet v2 architecture. MobileNet v2 uses residual units with a bottleneck architecture to connect its convolution modules. Each unit comprises three convolution layers: an expansion layer, a depth-wise convolution layer, and a projection layer. The expansion layer expands the number of channels in the data (the default expansion factor is 6) before the depth-wise convolution; the depth-wise convolution layer filters the input; and the projection layer reduces the number of channels back by the same factor, essentially doing the opposite of the expansion layer. Typically, each convolution layer is followed by batch normalization and a ReLU6 activation function; however, no activation function is applied to the output of the projection layer.

Figure 10: MobileNet.
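As a concrete illustration of the channel arithmetic described above, the helper below traces the tensor shapes through one inverted residual block with the default expansion factor of 6. It is a shape-only sketch (the function and argument names are our own); it performs no actual convolution.

```python
def inverted_residual_shapes(h, w, c_in, c_out, stride=1, expansion=6):
    """Trace (height, width, channels) through one MobileNet v2 block."""
    expanded = c_in * expansion              # 1x1 expansion: 6x more channels
    h2, w2 = h // stride, w // stride        # depthwise 3x3 may downsample
    return [
        ("expand 1x1 + BN + ReLU6", (h, w, expanded)),
        ("depthwise 3x3 + BN + ReLU6", (h2, w2, expanded)),
        ("project 1x1 + BN, linear", (h2, w2, c_out)),  # no activation here
    ]
```

For a 56×56×24 input, for example, the block expands to 144 channels, filters depth-wise, and projects back down; the residual connection applies only when the stride is 1 and the input and output channel counts match.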
4.2.2. SSD Architecture

SSD [43] is an object localizer which uses the outputs (feature maps) from the final few layers of the feature extractor (MobileNet v2) to predict the locations of different objects. SSD passes the input feature map through a series of convolution layers of varying sizes, which decrease in scale through the network. These layers allow it to detect objects of varying sizes. Each convolution layer also produces a fixed set of predictions, which contributes to a lower computation cost. Also, unlike traditional localizers, SSD chooses the bounding box for each prediction from a fixed pool of sizes, which drastically reduces the computation time. This enables the SSD architecture to be used in real-time detection systems. The output of this network is the location and confidence of each prediction. While training, the SSD architecture computes the total loss at the end of each step as a combination of the regression loss between the predicted and actual locations and the confidence loss of the predicted class:

L(x, c, l, g) = (1/N) (L_conf(x, c) + α L_loc(x, l, g)),     (6)

where α is a hyperparameter that balances the influence of the location loss on the total loss and N is the number of matched default boxes. This loss is optimized through the root mean squared (RMSProp) loss optimization algorithm [44]. At any time t, the RMS algorithm uses the gradient of the loss g_t and a decaying average of its square v_t (the velocity) to update the weights w:

v_t = β v_{t−1} + (1 − β) g_t²,   w_{t+1} = w_t − η g_t / (√v_t + ε),     (7)

where β and η are hyperparameters for the momentum calculation and the learning rate, and ε is a small value close to zero that prevents division-by-zero errors.
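A small numeric sketch of this training step is given below: the combined loss of equation (6) and one RMSProp-style update in the spirit of equation (7). The names alpha, beta, lr, and eps mirror the hyperparameters mentioned in the text, but the scalar formulation is our simplification of the per-weight update.

```python
import numpy as np

def ssd_total_loss(conf_loss, loc_loss, n_matched, alpha=1.0):
    """Equation (6): weighted sum of confidence and localization loss over N matches."""
    return (conf_loss + alpha * loc_loss) / max(n_matched, 1)

def rmsprop_step(w, grad, v, lr=0.1, beta=0.9, eps=1e-8):
    """Equation (7): scale the step by a decaying average of squared gradients."""
    v = beta * v + (1.0 - beta) * grad ** 2     # running "velocity" of grad^2
    w = w - lr * grad / (np.sqrt(v) + eps)      # eps guards against division by zero
    return w, v
```

Minimizing a toy quadratic with this update drives the weight toward zero, illustrating how the running average of squared gradients normalizes the effective step size.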

5. Experiments and Analysis

This section describes the experimental results of the proposed scheme. The experiment was performed in two phases. The first phase validates the Kiropter robot's performance on different aircraft surfaces and captures aircraft surface images for visual inspection. The second phase validates the detection algorithm with the captured aircraft skin images. The images of defects and stains are captured by operating the robot in a semiautonomous mode, in which the navigation of the robot is controlled manually through teleoperation. During the semiautonomous mode, however, the robot automatically avoids the windows and the nose of the plane by using an inductive sensor and also performs the shape change automatically when it moves on the fuselage area.

5.1. Kiropter Robot Tests

The performance of the Kiropter robot was tested in two environments: the RoAR laboratory and the Institute of Technical Education (ITE) College, Singapore. In the RoAR laboratory, the platform was tested on curved aircraft skin, a vertical flat surface, and a glass surface. At ITE College, the Kiropter robot was tested on actual aircraft, specifically Boeing 737 and combat aircraft models. These results are shown in Figure 11.

Figure 11: Kiropter robot in operation. The robot has been highlighted using yellow circles.

During the inspection, the robot was controlled through a GUI using Bluetooth communication. Through the GUI, the robot was paused at each stage (where stains and defects were visible) for a few seconds to capture a better-quality surface picture. The captured images are instantaneously sent to the remote inspection console and are recorded in parallel on a 32 GB SD card in the robot. The trial was performed in different regions of the aircraft surface, including the fuselage section, the wings, and the underside of the aircraft. Figure 12 shows some of the defect and stain images captured by the Kiropter. These captured images were used to train and test the detection algorithm.

Figure 12: Captured defect and stain images. (a–c) have stains and (d–f) have defects.
5.2. Results of the Detection Network
5.2.1. Dataset Preparation

The effectiveness of the detection algorithm was tested with Kiropter-captured aircraft skin images. The dataset contains about 2200 images from 15 different aircraft located at ITE, Singapore. The images are balanced across the two classes: stains (mainly from oil and liquid spills) and defects (which include cracks, scratches, and patches). Each image is resized to a fixed resolution. Then, to improve the CNN learning rate and prevent overfitting, data expansion is applied to the captured images. Data expansion involves applying geometrical transformations such as rotation, scaling, and flipping. These images were then preprocessed and labeled manually.
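The geometric data expansion step can be illustrated with a short NumPy sketch. This version uses flips and 90° rotations only; it is a deliberate simplification (the paper also mentions arbitrary rotation and scaling, which require interpolation).

```python
import numpy as np

def expand_dataset(images):
    """Return 6 geometric variants per image: original, two flips, three rotations."""
    out = []
    for img in images:
        out.extend([
            img,
            np.fliplr(img),    # horizontal flip
            np.flipud(img),    # vertical flip
            np.rot90(img, 1),  # 90 degrees
            np.rot90(img, 2),  # 180 degrees
            np.rot90(img, 3),  # 270 degrees
        ])
    return out
```

Such an expansion multiplies the training set size while leaving the label semantics unchanged, since a rotated or flipped stain is still a stain.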

Standard performance metrics such as accuracy, precision, recall, miss rate, and F1 score are used to evaluate the model. The dataset is split into 10 sections for performing k-fold (here k = 10) cross-validation. In this process, 9 of the 10 splits are used for training and the remaining one is used to evaluate the model; the process is repeated 10 times. k-fold cross-validation removes any bias which could appear due to a particular split of training and testing data. The performance metrics reported in this paper are the means over these 10 runs. The images reported are from the model with the highest accuracy. The model was trained using the TensorFlow framework on Ubuntu 16.04 with the following hardware configuration: an Intel Xeon E5-1600 V4 CPU, 64 GB RAM, and an NVIDIA Quadro P4000 GPU with 12 GB of video memory.
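The 10-fold cross-validation split described above can be sketched without any ML library. The generator below (our own helper, not the authors' code) shuffles the indices once and yields 10 disjoint train/test partitions.

```python
import random

def kfold_splits(n, k=10, seed=0):
    """Yield k (train_idx, test_idx) pairs covering all n samples exactly once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # one fixed shuffle for all folds
    folds = [idx[i::k] for i in range(k)]     # round-robin into k near-equal folds
    for i in range(k):
        test = folds[i]
        train = [j for m, fold in enumerate(folds) if m != i for j in fold]
        yield train, test
```

Each of the 10 runs trains on 9 folds and evaluates on the held-out fold; the reported metrics are then averaged over the runs, as in the paper.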

5.2.2. Detection Results

Figures 13 and 14 show the detection results of the proposed algorithm. Here, stain regions are marked with green rectangular boxes and defects with blue rectangular boxes. The experimental results show that the detection algorithm can detect most of the stains and defects in the captured skin images. Table 1 shows the number of detections of each class; there are 140 images from each class for validation of the results, and there are some cases where neither a stain nor a defect is detected. The statistical results, shown in Table 2, indicate that the algorithm detected defects with an average confidence level of 97% and stains with an average confidence level of 94%. On the workstation used for training, the SSD MobileNet model takes 32 ms for one inference and the preprocessing takes 19 ms, for a total of 51 ms. On the Jetson Nano, the SSD MobileNet scheme takes about 73 ms per image for a prediction, while the enhanced scheme takes about 129 ms, implying that 56 ms is spent on the preprocessing algorithm that enhances the accuracy.

Figure 13: Stain detection results.
Figure 14: Defect detection results.
Table 1: True and false detections. There are 140 images from each class.
Table 2: Detection results.
5.2.3. Comparison Analysis with the Standard Detection Network

The performance of the algorithm has been compared with standard SSD MobileNet (without a preprocessing stage) in terms of the abovementioned performance metrics. Both networks are trained for the same number of steps. Both stain detection and defect detection performance increase when preprocessing is used. In some cases, certain false classifications are avoided when preprocessing is used; this is evident in cases where defects and stains look similar but the difference is amplified by preprocessing. A few of these cases are shown in Figure 15. In most cases, the confidence with which the network predicts a certain class increases for enhanced images, because stains and defects often have similar features whose differences are brought out by preprocessing. Examples of this case are shown in Figure 16. In cases where the stain is faint, enhanced SSD MobileNet performs better than the standard network, as shown in Figure 17. This can be attributed to the fact that preprocessing the images exposes subtle features and reduces unwanted background information, making these regions easier for the CNN architecture to identify.

Figure 15: Comparison with SSD MobileNet based on false detection.
Figure 16: Comparison with SSD MobileNet based on prediction confidence.
Figure 17: Comparison with SSD MobileNet based on miss detection.
5.2.4. Comparison with Other Aircraft Skin Inspection Schemes

This section describes a comparative analysis of the proposed algorithm against existing aircraft surface inspection algorithms. The comparison is based on the detection accuracy of each model. Table 3 shows the detection accuracy of various aircraft skin inspection algorithms based on conventional methods and deep learning schemes. The results indicate that the proposed scheme achieves better accuracy than the defect inspection schemes of Malekzadeh et al. [27] and Siegel and Gunatilake [45]. Moreover, all of these works focus only on defect detection; since none of them report stain detection, it is difficult to compare the performance of the proposed scheme and the reported schemes directly.

Table 3: Comparison with aircraft inspection schemes.
5.2.5. Comparison with Other Defect Detection Schemes

The effectiveness of the proposed algorithm is further analyzed against other deep learning frameworks on the DAGM 2007 defect image dataset. The DAGM 2007 dataset contains stained, cracked, and pitted surfaces captured under different lighting conditions; on this dataset, the proposed network achieved 93.2% accuracy.

Table 4 shows the detection accuracy of different schemes alongside the present scheme. Here, SSD MobileNet and the compact CNN scheme are used as texture-based CNN schemes, while AlexNet CNN and Faster RCNN are trained on content image datasets. From this table, it can be inferred that our detection algorithm has strong detection capability on defective images with textured backgrounds and can be adapted to various defect detection applications. This is specifically true for detecting low-contrast objects such as defects and stains. Compared with traditional methods including AlexNet CNN and Faster RCNN, our method achieves better defect classification accuracy. However, since these networks are trained on different datasets for different applications, their performance cannot be compared exactly. Faster RCNN generally performs better than SSD MobileNet in object detection applications; the disparity in the table may be due to differences in the number and types of defects present in each dataset, as well as in the preprocessing methods used. The proposed approach also groups all defects into one class, compared to the individual classes in some of the compared datasets, which can lead to better detection results.

Table 4: Comparison with other defect detection schemes.
5.3. Advantages and Limitations

Generally, UAV-based inspection has many advantages over robot-based inspection due to its high mobility [35, 36, 46]. However, robot-based inspection can provide close-up images of a defect or stain compared to UAVs. Also, due to the fixed distance between the platform and the surface, the chance of missing a detection due to variable focus or vibration is lower than with UAV-based models [35, 36]. This is shown in Table 4, where the authors report a loss of detection performance when a CNN is used on a UAV due to vibration issues. Furthermore, the proposed robotic architecture can be extended to include automated cleaning systems, which can clean the detected stains while also inspecting the surface for defects. Energy consumption is also reduced because the robot does not use the EDF on the upper part of the fuselage. The robot avoids areas of the airplane such as the nose, windows, and antennas, where the plane has sensitive sensors and materials that could easily be damaged by the robot. However, the robot is designed for larger aircraft and is not suitable for small airplanes.

SSD MobileNet is a lightweight scheme that can perform real-time detection at the cost of some accuracy. Faster RCNN yields better detection results but is a larger model and takes longer to run. Enhancing the images through preprocessing increases the accuracy of the proposed model while still allowing inference in real time.
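The reason SSD MobileNet is lightweight is MobileNet's use of depthwise-separable convolutions in place of standard convolutions [42]. A rough sketch of the multiply-add savings, with illustrative layer shapes that are assumptions rather than the paper's exact network:

```python
# Rough FLOP comparison: standard vs. depthwise-separable convolution,
# the substitution that makes MobileNet (and hence SSD MobileNet) light.

def standard_conv_flops(h, w, cin, cout, k):
    """Multiply-adds for a standard k x k convolution layer."""
    return h * w * cin * cout * k * k

def separable_conv_flops(h, w, cin, cout, k):
    """Depthwise k x k filtering followed by a 1 x 1 pointwise projection."""
    depthwise = h * w * cin * k * k
    pointwise = h * w * cin * cout
    return depthwise + pointwise

# Illustrative (assumed) layer: 56x56 feature map, 128 -> 128 channels, 3x3 kernel.
std = standard_conv_flops(56, 56, 128, 128, 3)
sep = separable_conv_flops(56, 56, 128, 128, 3)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

For a 3x3 kernel with 128 output channels the saving is roughly 9x/(1 + 9/128), i.e. around 8x fewer multiply-adds per layer, which is why real-time inference is feasible.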

6. Conclusion

This work proposed an aircraft surface inspection method using an indigenously developed reconfigurable climbing robot (Kiropter) and an enhanced visual inspection algorithm. An enhanced SSD MobileNet-based deep learning framework was proposed for detecting stains and defects on the aircraft surface. In the preprocessing stage, a self-filtering-based periodic pattern detection filter was included in the SSD MobileNet deep learning framework to reduce unwanted background information and enhance the defect and stain features. The feasibility of the proposed method was verified on parts of aircraft skin in the RoAR lab and on real aircraft at ITE Aerospace, Singapore. The experimental results proved that the developed climbing robot can successfully move around complex regions of the aircraft, including the fuselage and confined areas, and capture the defect and stain regions. Further, the efficiency of the detection algorithm was verified on the captured images, and its results were compared with conventional SSD MobileNet and existing defect detection algorithms. The statistical results show that the proposed enhanced SSD MobileNet framework achieves improved detection accuracy (96.2%) for aircraft surface defects (with an average 97% confidence level) and stains (with a 94% confidence level). Compared with conventional SSD MobileNet and other defect detection algorithms, the proposed scheme achieves better detection accuracy, largely due to the removal of most of the unwanted background data. In future work, we plan to test the robot on various aircraft and to increase the number of defect classification classes, such as corrosion, scratches, and pitted surfaces. We also plan to develop algorithms to localize the detected regions. Furthermore, the evaluation of the severity of detected defects could be automated as well.
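The core idea of the periodic pattern filter in the preprocessing stage, suppressing repetitive background texture (e.g., rivet lines) while preserving aperiodic defect content, can be sketched in the frequency domain. The notch design below is a simplified assumption for illustration and does not reproduce the paper's exact self-filtering formulation:

```python
import numpy as np

def suppress_periodic_background(image, n_peaks=2):
    """Zero the n strongest non-DC FFT peaks (the periodic texture)."""
    f = np.fft.fftshift(np.fft.fft2(image))
    mag = np.abs(f)
    cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
    mag[cy, cx] = 0.0                     # keep the DC (mean) term intact
    top = np.argsort(mag, axis=None)[-n_peaks:]
    f[np.unravel_index(top, mag.shape)] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

# Synthetic demo: sinusoidal "rivet-line" texture plus a bright defect blob.
x = np.arange(64)
texture = np.tile(0.5 * np.sin(2 * np.pi * x / 8), (64, 1))  # periodic background
defect = np.zeros((64, 64))
defect[30:34, 30:34] = 2.0                                   # aperiodic defect
cleaned = suppress_periodic_background(texture + defect)
# The sinusoid collapses to two FFT peaks, which are notched;
# the defect blob survives almost unchanged.
```

Because a periodic texture concentrates into a few sharp spectral peaks while a localized defect spreads its energy thinly across the spectrum, notching those peaks removes the background with little effect on the defect features fed to the detector.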

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Supplementary Materials

Sample video of the robot Kiropter, for demonstration purposes. (Supplementary Materials)

References

  1. Civil Aviation Safety Authority Australia, Airworthiness Bulletin, 2019, January 2019, https://www.casa.gov.au/files/awb-indexpdf.
  2. J. Komorowski, D. Forsyth, D. Simpson, and R. Gould, “Probability of detection of corrosion in aircraft structures,” in RTO AVT workshop–airframe inspection reliability under field/depot conditions, vol. 10, Brussels, Belgium, May 1998.
  3. P. Gunatilake, M. Siegel, A. G. Jordan, and G. W. Podnar, “Image enhancement and understanding for remote visual inspection of aircraft surface,” in Proc. SPIE 2945, Nondestructive Evaluation of Aging Aircraft, Airports, and Aerospace Hardware, Scottsdale, AZ, USA, November 1996. View at Publisher · View at Google Scholar
  4. M. Siegel and P. Gunatilake, “Remote enhanced visual inspection of aircraft by a mobile robot,” in Proc. of the 1998 IEEE Workshop on Emerging Technologies, Intelligent Measurement and Virtual Systems for Instrumentation and Measurement (ETIMVIS98), pp. 49–58, St. Paul, MN, USA, 1998.
  5. I. Jovančević, S. Larnier, J.-J. Orteu, and T. Sentenac, “Automated exterior inspection of an aircraft with a pan-tilt-zoom camera mounted on a mobile robot,” Journal of Electronic Imaging, vol. 24, no. 6, article 061110, 2015. View at Publisher · View at Google Scholar · View at Scopus
  6. P. Gunatilake, M. Siegel, A. G. Jordan, and G. W. Podnar, “Image understanding algorithms for remote visual inspection of aircraft surfaces,” in Proc. SPIE 3029, Machine Vision Applications in Industrial Inspection V, pp. 2–14, San Jose, CA, USA, April 1997. View at Publisher · View at Google Scholar
  7. M. Siegel, P. Gunatilake, and G. Podnar, “Robotic assistants for aircraft inspectors,” IEEE Instrumentation & Measurement Magazine, vol. 1, no. 1, pp. 16–30, 1998. View at Publisher · View at Google Scholar · View at Scopus
  8. B. Chu, K. Jung, C.-S. Han, and D. Hong, “A survey of climbing robots: locomotion and adhesion,” International Journal of Precision Engineering and Manufacturing, vol. 11, no. 4, pp. 633–647, 2010. View at Publisher · View at Google Scholar · View at Scopus
  9. Air Force Research Laboratory, Robotic arm tool poised to save costly inspection time, December 2018, https://www.afspc.af.mil/News/Article-Display/Article/1088209/robotic-arm-tool-poised-to-save-costly-inspection-time/.
  10. X. Zhiwei, C. Muhua, and G. Qingji, “The structure and defects recognition algorithm of an aircraft surface defects inspection robot,” in 2009 International Conference on Information and Automation, pp. 740–745, Zhuhai, Macau, China, June 2009. View at Publisher · View at Google Scholar · View at Scopus
  11. C. Menon, M. Murphy, and M. Sitti, “Gecko inspired surface climbing robots,” in 2004 IEEE International Conference on Robotics and Biomimetics, pp. 431–436, Shenyang, China, August 2004. View at Publisher · View at Google Scholar
  12. R. L. Tummala, R. Mukherjee, N. Xi et al., “Climbing the walls [robots],” IEEE Robotics and Automation Magazine, vol. 9, no. 4, pp. 10–19, 2002. View at Publisher · View at Google Scholar · View at Scopus
  13. S. Nansai and R. E. Mohan, “A survey of wall climbing robots: recent advances and challenges,” Robotics, vol. 5, no. 3, p. 14, 2016. View at Publisher · View at Google Scholar · View at Scopus
  14. J. Yuan, X. Wu, Y. Kang, and A. Ben, “Research on reconfigurable robot technology for cable maintenance of cable-stayed bridges in-service,” in 2010 International Conference on Mechanic Automation and Control Engineering, pp. 1019–1022, Wuhan, China, June 2010. View at Publisher · View at Google Scholar · View at Scopus
  15. A. C. C. de Sousa, D. M. Viana, and C. M. C. e Cavalcante Koike, “Sensors in reconfigurable modular robot for pipeline inspection: design and tests of a prototype,” in 2014 Joint Conference on Robotics: SBR-LARS Robotics Symposium and Robocontrol, pp. 7–12, Sao Carlos, Brazil, October 2014. View at Publisher · View at Google Scholar · View at Scopus
  16. M. R. Jahanshahi, W.-M. Shen, T. G. Mondal, M. Abdelbarr, S. F. Masri, and U. A. Qidwai, “Reconfigurable swarm robots for structural health monitoring: a brief review,” International Journal of Intelligent Robotics and Applications, vol. 1, no. 3, pp. 287–305, 2017. View at Publisher · View at Google Scholar
  17. V. Prabakaran, M. R. Elara, T. Pathmakumar, and S. Nansai, “hTetro: a Tetris-inspired shape shifting floor cleaning robot,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 6105–6112, Singapore, Singapore, May-June 2017. View at Publisher · View at Google Scholar · View at Scopus
  18. M. Ilyas, S. Yuyao, R. E. Mohan, M. Devarassu, and M. Kalimuthu, “Design of sTetro: a modular, reconfigurable, and autonomous staircase cleaning robot,” Journal of Sensors, vol. 2018, Article ID 8190802, 16 pages, 2018. View at Publisher · View at Google Scholar · View at Scopus
  19. M. Yim, K. Roufas, D. Duff, Y. Zhang, C. Eldershaw, and S. Homans, “Modular reconfigurable robots in space applications,” Autonomous Robots, vol. 14, no. 2/3, pp. 225–237, 2003. View at Publisher · View at Google Scholar · View at Scopus
  20. Y.-S. Kwon, E.-J. Jung, H. Lim, and B.-J. Yi, “Design of a reconfigurable indoor pipeline inspection robot,” in 2007 International Conference on Control, Automation and Systems, pp. 712–716, Seoul, South Korea, October 2007. View at Publisher · View at Google Scholar · View at Scopus
  21. A. Le, V. Prabakaran, V. Sivanantham, and R. Mohan, “Modified a-star algorithm for efficient coverage path planning in tetris inspired self-reconfigurable robot with integrated laser sensor,” Sensors, vol. 18, no. 8, p. 2585, 2018. View at Publisher · View at Google Scholar · View at Scopus
  22. A. Le, M. Arunmozhi, P. Veerajagadheswar et al., “Complete path planning for a tetris-inspired self-reconfigurable robot by the genetic algorithm of the traveling salesman problem,” Electronics, vol. 7, no. 12, p. 344, 2018. View at Publisher · View at Google Scholar · View at Scopus
  23. R. Mumtaz, M. Mumtaz, A. B. Mansoor, and H. Masood, “Computer aided visual inspection of aircraft surfaces,” International Journal of Image Processing, vol. 6, no. 1, pp. 38–53, 2012. View at Google Scholar
  24. M. Rice, L. Li, G. Ying et al., Automating the visual inspection of aircraft, https://oar.a-star.edu.sg/jspui/handle/123456789/2336.
  25. A. Ortiz, F. Bonnin-Pascual, E. Garcia-Fidalgo, and J. Company-Corcoles, “Vision-based corrosion detection assisted by a micro-aerial vehicle in a vessel inspection application,” Sensors, vol. 16, no. 12, article 2118, 2016. View at Publisher · View at Google Scholar · View at Scopus
  26. Y. Li, H. Huang, Q. Xie, L. Yao, and Q. Chen, “Research on a surface defect detection algorithm based on MobileNet-SSD,” Applied Sciences, vol. 8, no. 9, article 1678, 2018. View at Publisher · View at Google Scholar · View at Scopus
  27. T. Malekzadeh, M. Abdollahzadeh, H. Nejati, and N.-M. Cheung, “Aircraft fuselage defect detection using deep neural networks,” http://arxiv.org/abs/1712.09213.
  28. X. Tao, D. Zhang, W. Ma, X. Liu, and D. Xu, “Automatic metallic surface defect detection and recognition with convolutional neural networks,” Applied Sciences, vol. 8, no. 9, article 1575, 2018. View at Publisher · View at Google Scholar · View at Scopus
  29. Y.-J. Cha, W. Choi, G. Suh, S. Mahmoudkhani, and O. Büyüköztürk, “Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types,” Computer-Aided Civil and Infrastructure Engineering, vol. 33, no. 9, pp. 731–747, 2018. View at Publisher · View at Google Scholar · View at Scopus
  30. B. Kim and S. Cho, “Automated vision-based detection of cracks on concrete surfaces using a deep learning technique,” Sensors, vol. 18, no. 10, article 3452, 2018. View at Publisher · View at Google Scholar · View at Scopus
  31. R. Zhang, Z. Wang, and Y. Zhang, “Astronaut visual tracking of flying assistant robot in space station based on deep learning and probabilistic model,” International Journal of Aerospace Engineering, vol. 2018, Article ID 6357185, 17 pages, 2018. View at Publisher · View at Google Scholar · View at Scopus
  32. Y.-J. Cha and W. Choi, “Vision-based concrete crack detection using a convolutional neural network,” in Dynamics of Civil Structures, Volume 2, J. Caicedo and S. Pakzad, Eds., Conference Proceedings of the Society for Experimental Mechanics Series, pp. 71–73, Springer International Publishing, Cham, 2017. View at Publisher · View at Google Scholar · View at Scopus
  33. S. Deitsch, V. Christlein, S. Berger et al., “Automatic classification of defective photovoltaic module cells in electroluminescence images,” http://arxiv.org/abs/1807.02894.
  34. B. Ramalingam, A. Lakshmanan, M. Ilyas, A. Le, and M. Elara, “Cascaded machine-learning technique for debris classification in floor-cleaning robot application,” Applied Sciences, vol. 8, no. 12, article 2649, 2018. View at Publisher · View at Google Scholar · View at Scopus
  35. D. Kang and Y.-J. Cha, “Damage detection with an autonomous uav using deep learning,” in Proc. SPIE 10598, Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2018, Denver, Colorado, USA, March 2018. View at Publisher · View at Google Scholar · View at Scopus
  36. D. Kang and Y.-J. Cha, “Autonomous UAVs for structural health monitoring using deep learning and an ultrasonic beacon system with geo-tagging,” Computer-Aided Civil and Infrastructure Engineering, vol. 33, no. 10, pp. 885–902, 2018. View at Publisher · View at Google Scholar · View at Scopus
  37. R. Qian, Q. Liu, Y. Yue, F. Coenen, and B. Zhang, “Road surface traffic sign detection with hybrid region proposal and fast r-cnn,” in 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), pp. 555–559, Changsha, China, August 2016. View at Publisher · View at Google Scholar · View at Scopus
  38. W. Brendel and M. Bethge, “Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet,” International conference on learning representations, 2019, https://openreview.net/forum?id=SkfMWhAqYQ. View at Google Scholar
  39. M. Siegel and P. Gunatilake, Robotic Enhanced Visual Inspection of Aircraft Skin, p. 530, 1999, http://www.academia.edu/download/6154878/10.1.1.172.9886.pdf.
  40. C. J. Alberts, C. W. Carroll, W. M. Kaufman, C. J. Perlee, and M. W. Siegel, Automated inspection of aircraft, December 2018, http://www.tc.faa.gov/its/worldpac/techrpt/ar97-69.pdf.
  41. J. Shang, T. Sattar, S. Chen, and B. Bridge, “Design of a climbing robot for inspecting aircraft wings and fuselage,” Industrial Robot: An International Journal, vol. 34, no. 6, pp. 495–502, 2007. View at Publisher · View at Google Scholar · View at Scopus
  42. A. G. Howard, M. Zhu, B. Chen et al., “MobileNets: efficient convolutional neural networks for mobile vision applications,” http://arxiv.org/abs/1704.04861.
  43. W. Liu, D. Anguelov, D. Erhan et al., “SSD: single shot multibox detector,” in Computer Vision – ECCV 2016. ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds., vol. 9905 of Lecture Notes in Computer Science, pp. 21–37, Springer, Cham, 2016. View at Publisher · View at Google Scholar · View at Scopus
  44. T. Tieleman and G. Hinton, Lecture 6.5-rmsprop, COURSERA: Neural Networks for Machine Learning, University of Toronto, Technical Report, 2012.
  45. M. Siegel and P. Gunatilake, “Enhanced remote visual inspection of aircraft skin,” in Proc. Intelligent NDE Sciences for Aging and Futuristic Aircraft Workshop, pp. 101–112, Citeseer, 1997. View at Google Scholar
  46. H. Shakhatreh, A. H. Sawalmeh, A. al-Fuqaha et al., “Unmanned Aerial Vehicles (UAVs): a survey on civil applications and key research challenges,” IEEE Access, vol. 7, pp. 48572–48634, 2019. View at Publisher · View at Google Scholar