Abstract

Aircraft surface inspection includes detecting surface defects caused by corrosion and cracks as well as stains from oil spills, grease, dirt sediments, etc. The conventional aircraft surface inspection process relies on human visual inspection, which is time-consuming and inefficient, whereas robots with onboard vision systems can inspect the aircraft skin safely, quickly, and accurately. This work proposes an aircraft surface defect and stain detection model using a reconfigurable climbing robot and an enhanced deep learning algorithm. A reconfigurable, teleoperated robot, named "Kiropter," is designed to capture aircraft surface images with an onboard RGB camera. An enhanced SSD MobileNet framework is proposed for stain and defect detection from these images. A self-filtering-based periodic pattern detection filter is included in the SSD MobileNet deep learning framework to enhance the detection of stains and defects in the aircraft skin images. The model has been tested with real aircraft surface images acquired from a Boeing 737 and a combat aircraft's surface using the teleoperated robot. The experimental results prove that the enhanced SSD MobileNet framework achieves improved detection accuracy of aircraft surface defects and stains compared to conventional models.

1. Introduction

Aircraft skin inspection is essential under the Corrosion Prevention and Control Program (CPCP) to ensure the aircraft's structural integrity [1]. Under CPCP, the aircraft should be kept thoroughly clean of deposits containing contaminating substances such as oil, grease, dirt, and other organic or foreign materials to protect the aircraft from the risk of corrosion and the degradation of seals and plastic components. Furthermore, after scheduled cleaning of the aircraft, it should be thoroughly inspected to identify uncleaned areas (typically stains) and surface defects. During cleaning, the cleaning agent may build up on these defects, which can worsen the damage caused [1, 2].

Human visual inspection is, by far, the most widely used method in aircraft surface inspection [3, 4] as per CPCP. However, the current practice of climbing onto the aircraft body to carry out surface inspection raises safety issues for the inspectors. It is also time-consuming and at times ineffective due to inspector fatigue or boredom. Automated aircraft skin inspection systems based on computer vision techniques could allow the inspector to safely, quickly, and accurately perform the necessary visual inspection [3, 5, 6]. Robotic assistance for inspection of the aircraft skin has been investigated in [4, 7–9]. These systems need a flexible teleoperated robotic platform with different locomotion capabilities to access the aircraft surface and an optimal detection algorithm for automatically detecting stains and defects on the aircraft skin.

Designing a robotic inspection platform with good adherence, mobility, and flexibility is a key challenge. Typically, fixed-morphology climbing robots are used for aircraft inspection. They use magnetic devices, vacuum suction cups, or propeller force to adhere to and climb the aircraft surface [8, 10–12]. However, these climbing robots face difficulties when accessing confined areas due to their less flexible design [8, 9, 13]. They also find it hard to climb overlapped joints and the fuselage, thereby reducing their coverage [9]. Reconfigurable robotic platforms can access confined spaces, rough terrains, and hard-to-navigate surfaces by dynamically changing their shape and functionality. Recently, reconfigurable robotic platforms have been widely developed and deployed in various applications including inspection [14, 15], repairs [16], cleaning [17, 18], and space applications [19]. Kwon et al. [20] developed a reconfigurable robot for in-house and sewage pipeline inspection. In [14], the authors developed a reconfigurable robot for bridge inspection. Tromino- and tetromino-tiling-algorithm-based reconfigurable shape-changing robots have also been developed for floor cleaning [21, 22]; these robots achieve better floor area coverage than fixed-morphology robots.

Another constraint for the aircraft visual inspection technique is developing a detection algorithm to recognize stains and defects automatically. In the last decade, various visual inspection algorithms have been applied to the field of aircraft inspection and surface defect detection. These visual inspection algorithms are classified into two types: traditional image processing-based visual inspection [5, 23–25] and machine learning techniques [26–31]. Typically, the traditional algorithms use rudimentary characteristics like edges, brightness, histograms, and spectral features to detect and segment defects [28]. However, these image processing methods work well only in controlled environments and often fail in complex real-world scenarios due to noise and complex backgrounds. Moreover, for various defects, the thresholds used in these algorithms need to be adjusted, or it may even be necessary to redesign the algorithms [28]. CNN-based algorithms have been successfully implemented in defect detection and inspection applications including surface crack and defect detection [26–28, 30, 32], solar panel inspection [33], and cleaning inspection [34]. Cha and Choi proposed the use of CNNs [32] and the Faster RCNN method [29] for better crack detection in concrete and metal surfaces. The authors suggest the use of Faster RCNN, which provides more optimized bounding boxes for crack localization. Further, they also propose a UAV system for tagging and localizing concrete cracks [35, 36].

Typically, the key challenge of deep learning algorithms for this application is the requirement of a large amount of image data and an optimal preprocessing algorithm. Preprocessing plays a vital role in helping the network recognize low-contrast objects and differentiate between objects with similar features, such as dirt, stains, and scratches on the aircraft surface, all at a small cost that is negligible compared to increasing the complexity of the CNN architecture [26]. Li et al. combined a preprocessing algorithm with SSD MobileNet in a surface defect detection application, using Gaussian filtering, edge detection, and circle Hough detection algorithms to enhance edge quality and filter out noise from the images [26]. In [37], edge information was used to improve the Faster RCNN detection accuracy in a traffic sign detection application; the authors showed that the edge information improved the detection accuracy and recall rate of Faster RCNN. Brendel and Bethge developed a CNN model named BagNets which uses local image features to achieve a better detection ratio and good feature sensitivity [38]. In [28], Tao et al. show that their compact CNN achieves better detection when trained on texture-based images than on content images.

In order to overcome the shortcomings mentioned earlier, this paper proposes a reconfigurable suction-based robot named "Kiropter" for aircraft inspection, along with an enhanced SSD MobileNet deep learning framework for recognizing and classifying the stains and defects on the aircraft surface. The reconfigurable robot is capable of accessing confined areas, overlapped joints, and the fuselage on the aircraft body by dynamically changing its shape and functionality. A self-filtering-based periodic pattern detection filter is integrated with the SSD MobileNet deep learning framework to effectively enhance the recognition of low-contrast stain and defect areas in the aircraft skin images. This article is organized as follows: related work is reported in Section 2. Section 3 describes the robotic architecture and functionality. The enhanced deep learning-based inspection model is described in Section 4. The experimental results are given in Section 5. Finally, conclusions and future work are provided in Section 6.

2. Related Work

Very few works on aircraft skin inspection exist in the literature. Some of these works focus on developing a robotic platform for inspection, while others focus on the detection algorithm. Siegel and Gunatilake [39] developed an aircraft surface inspection model based on the crown inspection mobile platform (CIMP), which captures images of the aircraft body and employs a computer-aided visual inspection algorithm for recognizing defects on the surface. The visual algorithm comprises a wavelet-based image enhancement scheme to highlight the cracks and a three-layered feedforward neural network with ten input, thirty hidden, and two output nodes to classify the cracks and corrosion. Enhanced remote visual inspection of the aircraft skin based on the CIMP robot system using an improved NN algorithm is proposed by Alberts et al. [40]. Here, the authors design a three-layer network (input, hidden, and output layers) with 169 input nodes for differentiating healthy and cracked areas of the aircraft surface. Automated aircraft visual inspection is reported by Rice et al. [24], where the authors use a depth camera fitted on the roof to scan the aircraft surface. This is followed by a contour fitting algorithm to visualize the defects present on the aircraft. Mumtaz et al. [23] examined three image processing algorithms to differentiate between cracks and scratches on the aircraft skin. The authors tested a neural network, the contourlet transform (CT) with a dot product classifier (DPC), and the discrete cosine transform (DCT) with a DPC, as well as the combination of DCT and CT with a DPC. Among the three, the combined DCT and CT with DPC scheme achieves the highest recognition rate. Jovančević et al. [5] developed an automated aircraft exterior inspection model for an autonomous mobile collaborative robot (cobot) using 2D image processing methods. The authors use a kernel mask to remove the noise and enhance the texture on aircraft surface regions. The Hough transform (HT) and edge detection circle (EDC) algorithms are used to extract the geometrical shapes of objects on the aircraft surface (oxygen bay, radome latch, air inlet vent, engine, and pitot probe), and a CAD model is used to inspect the state change. Shang et al. [41] developed a climbing robot for aircraft wing and fuselage inspection. The developed model was designed to accomplish various nondestructive tests, including eddy current inspection for surface cracks and subsurface corrosion and thermographic inspection to detect loose rivets. In [9], a snake-arm robot system is developed for aircraft Remote Access Nondestructive Evaluation (RANDE); the robot can reach into tight spaces in aircraft wings and perform crack and defect inspection. Aircraft inspection with a reconfigurable robot and an enhanced deep learning scheme is a novel approach with significant research potential.

3. Proposed Method

The functional block diagram of the proposed visual inspection model is shown in Figure 1. It comprises an inspection robot (Kiropter) and the enhanced inspection algorithm. This section first describes the structure and hardware of the robot, followed by the algorithm used for stain and defect detection.

3.1. General Overview

The Kiropter is a semiautonomous, teleoperated, differential 8W 2D robotic platform, as shown in Figure 1. This mobile robot is capable of navigating around the surface of the aircraft by utilizing its reconfigurable nature. It is built with polylactic acid (PLA) for the structure, 5 mm acrylic for the base, and thermoplastic polyurethane (TPU) for the electric ducted fan (EDF) holders. The platform is 450 mm long and 200 mm wide and weighs approximately 2.21 kg in total. The wheels are made of soft, high-friction rubber to increase traction on the airplane surface during locomotion. The robot is powered by 11.1 V, 4200 mAh, 45 C LiPo batteries connected in parallel to the units.

3.2. Hardware Architecture

Figure 2 shows the hardware architecture of the Kiropter robot. It comprises four functional units: the central control unit (CCU), the locomotive system, the EDF system, and the vision system. The robot communicates wirelessly and can be controlled through a graphical user interface (GUI). The function of each unit is described as follows.

3.3. Central Control Unit

The central control unit (CCU) is powered by an Arduino Mega 2560 microcontroller. It handles the wireless communication interface and generates the required control signals for the locomotion unit and the shape-changing unit according to the control flow chart shown in Figure 3.

3.4. Locomotive System

Locomotion of the Kiropter robot is achieved through three functions: adhesion, rolling, and transformation. Electric turbines are used for adhesion, and servomotors are used for navigation around the surface of the aircraft and for transformation. The electric turbines are controlled from the CCU through the HV75A brushless motor controller; the CCU generates the required PWM signal to the driving unit (HV75A) to adjust the turbine speed. For transformation, a servomotor is placed in the central articulation of the robot. It has a torque of 1.5 Nm and is controlled asynchronously through the CCU. Through the servomotors, the robot can reconfigure its shape to different angles: 0°, 90° (on orthogonal surfaces), 45°, −5°, and −7° (Figure 4). This enables it to traverse difficult surfaces, specifically the curved transition between the aircraft wing and the body. In that way, the robot keeps its wheels in contact with the surface, which is critical for the stability of the platform. The position of the robot can be estimated from the feedback of the angular position of the wheel servomotors and the angular position of the central joint. In addition, a wheel encoder and two inertial measurement unit (IMU) sensors are fixed at the front and rear ends of the robot to estimate the robot's position and orientation on the airplane structure. The stain position and orientation information is estimated from the wheel encoder and IMU sensor data during the inspection stage, as sketched below.
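The paper does not detail the exact encoder/IMU fusion scheme, so the following is a minimal differential-drive odometry sketch under stated assumptions: incremental wheel-encoder distances in meters, an IMU yaw reading in radians, a naive fixed-weight heading blend, and a track width matching the 200 mm platform width.

```python
import numpy as np

def update_pose(x, y, theta, d_left, d_right, imu_yaw, track_width=0.2):
    """Hypothetical encoder + IMU pose update; illustrative only."""
    d_center = (d_left + d_right) / 2.0               # distance moved by robot center
    d_theta = (d_right - d_left) / track_width        # heading change from encoders
    theta = 0.5 * (theta + d_theta) + 0.5 * imu_yaw   # naive encoder/IMU blend
    x += d_center * np.cos(theta)                     # advance along blended heading
    y += d_center * np.sin(theta)
    return x, y, theta
```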

3.5. Electric Ducted Fan (EDF) System

An EDF is an impeller driven by a brushless motor mounted inside a circular duct. It provides a holding thrust of 3.5 kg. The EDF receives its power directly from the batteries. The speed of the EDF and the energy used are controlled by changing the pulses from the CCU independently for each EDF using an electronic speed controller (ESC) (Figure 5). Through the EDF system, the robot is able to adhere to and inspect the curved and lower surfaces of the airplane.

3.6. Vision System

The vision system consists of a WiFi-enabled HD 1080p camera (HDDB 10AD). The camera is placed 72 mm above the surface of the plane, in the center of the body of the robot. The camera is inclined at an angle of 30° from the horizontal plane of the robot (Figures 6 and 7), so it is estimated that the center of the camera, at a height of 72 mm, has a range of ≈460 mm on flat surfaces. The opening angle of the camera is 60°, so the field of vision starts at 23 mm from the robot in the horizontal plane.

4. Enhanced Deep Learning Framework

This section describes the vision-based aircraft surface stain and defect detection using the enhanced deep learning technique, as shown in Figure 8. The framework has two phases: preprocessing and detection.

4.1. Image Preprocessing

Generally, the backgrounds present in training and test image data can affect the learning and recognition abilities of all detection algorithms [26, 28, 38]. Through preprocessing, the effect of these backgrounds can be reduced, which can enhance the recognition accuracy at a relatively small cost. Preprocessing can also enhance weak edges and features which the model may otherwise find difficult to learn. In view of this, a self-filtering-based periodic pattern detection filter and the Sobel edge detection technique are adopted. The periodic pattern detection filter uses the self-filtering technique to suppress unwanted background patterns present in the images. It exploits the property that a periodic pattern in the spatial domain produces distinct peaks in the frequency domain. The self-filtering technique automatically adapts the filter function to the background of the image, computing the appropriate filter function from the magnitude of the Fourier-transformed image. After this, a Sobel edge detector is used to enhance the weak edges present in the image. The preprocessing stage is described in Algorithm 1.

Algorithm 1. Preprocessing algorithm.
Data: grayscale source image I(x, y) (pixel coordinates (x, y))
Result: preprocessed image
Step 1: transform the image to the frequency domain (coordinates (u, v)) using the 2D FFT, then apply the FFT shift.
Step 2: apply the log absolute function to the Fourier-transformed source image to generate the amplitude image.
Step 3: compute the self-filtering function from the amplitude image.
Step 4: suppress the periodic patterns in the frequency image using the self-filtering function.
Step 5: transform the filtered image back to image space using the inverse Fourier transform.
Step 6: perform edge enhancement. Due to the strong filtering effect of the prior stage, the edges of the stains and defects are slightly blurred, which may affect the detection accuracy of the algorithm. Hence, the Sobel filter is applied to enhance the edges of the defect and stain regions in the periodic-pattern-suppressed image.
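A minimal sketch of Algorithm 1 is given below. The exact self-filtering function is not specified in the text, so Step 3 here uses one plausible realization (suppressing frequency bins whose log-amplitude stands out from a local spectral average, where periodic patterns peak); the window size, threshold, and edge-blend weight are assumptions.

```python
import numpy as np
import cv2

def preprocess(gray):
    """Sketch of self-filtering periodic pattern suppression (Steps 1-5)
    followed by Sobel edge enhancement (Step 6)."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))   # Step 1
    amplitude = np.log1p(np.abs(f))                              # Step 2
    # Step 3 (assumed form): keep bins close to the local spectral average,
    # zero out isolated peaks that correspond to periodic patterns.
    local_mean = cv2.blur(amplitude, (15, 15))
    mask = np.where(amplitude - local_mean > 1.0, 0.0, 1.0).astype(np.float32)
    cy, cx = mask.shape[0] // 2, mask.shape[1] // 2
    mask[cy - 2:cy + 3, cx - 2:cx + 3] = 1.0     # preserve the DC region
    filtered = np.fft.ifft2(np.fft.ifftshift(f * mask))          # Steps 4-5
    img = np.abs(filtered).astype(np.float32)
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)               # Step 6
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    out = cv2.normalize(img + 0.5 * edges, None, 0, 255, cv2.NORM_MINMAX)
    return out.astype(np.uint8)
```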

4.2. SSD MobileNet Detection Network

SSD MobileNet is an object detection framework which is trained to detect and classify the defects and stains in the captured images. Here, MobileNet v2 is the base network, utilized to extract high-level features from the images for classification and detection, and SSD is the detection model, which uses the MobileNet feature map outputs and convolution layers of different sizes to classify and detect bounding boxes through regression. The connection of MobileNet and SSD is shown in Figure 9.

4.2.1. MobileNet V2 Architecture

MobileNet v2 [42] is a lightweight feature extractor that is widely used for real-time detection applications and low-power embedded systems. Figure 10 shows the functional block diagram of the MobileNet v2 architecture. MobileNet v2 uses residual units with a bottleneck architecture for convolution module connection. Each MobileNet v2 bottleneck comprises three convolution layers: an expansion layer, a depth-wise convolution layer, and a projection layer. The expansion layer expands the number of channels in the data (the default expansion factor is 6) before the depth-wise convolution; the depth-wise convolution layer filters the input; and the projection layer reduces the number of channels again, essentially doing the opposite of the expansion layer. Each convolution layer is followed by batch normalization and a ReLU6 activation function, except that the output of the projection layer has no activation function applied to it.
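As an illustration of this block structure, the following is a minimal PyTorch sketch of a MobileNet v2 inverted residual (bottleneck) unit; it follows the description above and is not the exact implementation used in this work.

```python
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNet v2 bottleneck: 1x1 expansion -> 3x3 depth-wise -> 1x1 projection."""
    def __init__(self, c_in, c_out, stride=1, expand=6):
        super().__init__()
        c_mid = c_in * expand
        self.use_residual = stride == 1 and c_in == c_out
        self.block = nn.Sequential(
            # expansion layer: widen the channels by the expansion factor
            nn.Conv2d(c_in, c_mid, 1, bias=False),
            nn.BatchNorm2d(c_mid), nn.ReLU6(inplace=True),
            # depth-wise convolution: one 3x3 filter per channel
            nn.Conv2d(c_mid, c_mid, 3, stride, 1, groups=c_mid, bias=False),
            nn.BatchNorm2d(c_mid), nn.ReLU6(inplace=True),
            # linear projection: reduce channels, no activation afterwards
            nn.Conv2d(c_mid, c_out, 1, bias=False),
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_residual else y
```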

4.2.2. SSD Architecture

SSD [43] is an object localizer which uses the output feature maps from the final few layers of the feature extractor (MobileNet v2) to predict the locations of different objects. SSD passes the input feature map through a series of convolution layers of varying sizes, which decrease in scale through the network; these layers allow it to detect objects of varying sizes. Each convolution layer also produces a fixed set of predictions, which contributes to a lower computation cost. Also, unlike traditional localizers, SSD chooses the bounding box for each prediction from a fixed pool of default sizes, which drastically reduces the computation time. This enables the SSD architecture to be used in real-time detection systems. The output of this network is the location and confidence of each prediction. While training, the SSD architecture computes the total loss at the end of each step as a combination of the regression loss between the predicted and actual locations and the confidence loss of the predicted class:

$$L(x, c, l, g) = \frac{1}{N}\left(L_{\mathrm{conf}}(x, c) + \alpha L_{\mathrm{loc}}(x, l, g)\right) \qquad (6)$$

where $N$ is the number of matched default boxes and $\alpha$ is a hyperparameter that balances the influence of the location loss on the total loss. This loss is optimized through the root mean squared (RMS) loss optimization algorithm [44]. At any time step $t$, the RMS algorithm uses the gradient of the loss $g_t$ and a running average of its squared magnitude (the velocity) $v_t$ to update the weights $w$:

$$v_t = \rho\, v_{t-1} + (1 - \rho)\, g_t^2, \qquad w_{t+1} = w_t - \frac{\eta}{\sqrt{v_t} + \epsilon}\, g_t \qquad (7)$$

Here, $\rho$ and $\eta$ are hyperparameters for the momentum calculation, and $\epsilon$ is a small value close to zero to prevent a division-by-zero error.
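For concreteness, a minimal sketch of the RMS update in equation (7), without the momentum extension; the learning rate, decay, and epsilon values are illustrative assumptions, not the training configuration used here.

```python
import numpy as np

def rmsprop_step(w, grad, v, lr=1e-3, rho=0.9, eps=1e-8):
    """One RMSProp weight update per equation (7); illustrative values."""
    v = rho * v + (1.0 - rho) * grad ** 2   # running average of squared gradients
    w = w - lr * grad / (np.sqrt(v) + eps)  # scale the step by the gradient RMS
    return w, v
```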

5. Experiments and Analysis

This section describes the experimental results of the proposed scheme. The experiment has been performed in two phases. The first phase validates the Kiropter robot's performance on different aircraft surfaces and captures aircraft surface images for visual inspection. The second phase validates the detection algorithm with the captured aircraft skin images. These images of defects and stains are captured by operating the robot in a semiautonomous mode. In the semiautonomous mode, the navigation control of the robot is performed manually through teleoperation. However, during the semiautonomous mode, the robot avoids the windows and the nose of the plane automatically by using an inductive sensor and also performs the shape change automatically when it moves on the fuselage area.

5.1. Kiropter Robot Tests

The performance of the Kiropter robot was tested in two environments: the RoAR laboratory and the Institute of Technical Education (ITE), Singapore. In the RoAR laboratory, the platform was tested on curved aircraft skin and on vertical flat and glass surfaces. At ITE College, Singapore, the Kiropter robot was tested on actual aircraft, specifically Boeing 737 and combat aircraft models. These results are shown in Figure 11.

During the inspection, the robot was controlled through a GUI using Bluetooth communication. Through the GUI, the robot was paused for a few seconds at each stage where stains and defects were visible, to capture a higher-quality picture of the surface. The captured images are instantaneously sent to the remote inspection console and are also recorded in parallel on a 32 GB SD card in the robot. The trials were performed on different regions of the aircraft surface including the fuselage section, wings, and the bottom of the aircraft. Figure 12 shows some of the defect and stain images captured by the Kiropter. These captured images have been used to train and test the detection algorithm.

5.2. Results of the Detection Network

5.2.1. Dataset Preparation

The effectiveness of the detection algorithm has been tested with the aircraft skin images captured by Kiropter. This dataset contains about 2200 images from 15 different aircraft located at ITE, Singapore. The images are balanced across the two classes: stains (mainly from oil and liquid spills) and defects (which include cracks, scratches, and patches). Each image is resized to a fixed resolution. Then, to improve the CNN learning rate and prevent overfitting, data expansion is applied to the captured images. Data expansion involves applying geometrical transformations such as rotation, scaling, and flipping, as sketched below. These images were then preprocessed and labeled manually.
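A minimal sketch of one such geometric expansion step; the rotation, scale, and flip parameter ranges are illustrative assumptions, not the values used for this dataset.

```python
import random
import cv2

def expand(img):
    """Apply one random rotation/scale/flip to an image (assumed ranges)."""
    h, w = img.shape[:2]
    angle = random.uniform(-15, 15)    # small random rotation (degrees)
    scale = random.uniform(0.9, 1.1)   # slight random zoom
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    out = cv2.warpAffine(img, m, (w, h), borderMode=cv2.BORDER_REFLECT)
    if random.random() < 0.5:          # random horizontal flip
        out = cv2.flip(out, 1)
    return out
```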

Standard performance metrics such as accuracy, precision, recall, miss rate, and F1 score are used to evaluate the model. The dataset is split into 10 sections for performing k-fold (here k = 10) cross-validation. In this process, 9 of the 10 splits are used for training and the remaining one is used to evaluate the model; this is repeated 10 times. k-fold cross-validation removes any bias which could arise from a particular split of training and testing data. The performance metrics reported in this paper are the mean over these 10 runs, and the images reported are from the model with the highest accuracy. The model was trained using the TensorFlow framework on Ubuntu 16.04 with the following hardware configuration: an Intel Xeon E5-1600 V4 CPU, 64 GB RAM, and an NVIDIA Quadro P4000 GPU with 12 GB of video memory.
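As an illustration of this protocol, a minimal sketch of the 10-fold loop; `train_and_evaluate` is a hypothetical placeholder for the actual SSD MobileNet training and evaluation pipeline.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(images, labels, train_and_evaluate, k=10):
    """Mean accuracy over k folds (here k = 10), as described above."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True).split(images):
        # 9 of the 10 splits train the model; the held-out split evaluates it
        scores.append(train_and_evaluate(images[train_idx], labels[train_idx],
                                         images[test_idx], labels[test_idx]))
    return np.mean(scores)  # reported metrics are the mean over the 10 runs
```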

5.2.2. Detection Results

Figures 13 and 14 show the detection results of the proposed algorithm. Here, stain regions are marked with a green rectangular box and defects are marked with a blue rectangular box. The experimental results show that the detection algorithm can detect most of the stains and defects in the captured skin images. Table 1 shows the number of detections of each class; there are 140 images from each class for validation of the results. There are some cases where neither a stain nor a defect is detected. The statistical results, shown in Table 2, indicate that the algorithm detected defects with an average confidence level of 97% and stains with an average confidence level of 94%. On the workstation used for training, the SSD MobileNet model takes 32 ms for one inference and the preprocessing takes 19 ms, for a total time of 51 ms. On a Jetson Nano, the SSD MobileNet scheme takes about 73 ms per image for a prediction, while the enhanced scheme takes about 129 ms, which implies that 56 ms is spent on the preprocessing algorithm that enhances the accuracy.

5.2.3. Comparison Analysis with the Standard Detection Network

The performance of the algorithm has been compared with standard SSD MobileNet (without a preprocessing stage) in terms of the abovementioned performance metrics. Both networks are trained for the same number of steps. Both stain detection and defect detection performance increase when preprocessing is used. In some cases, certain false classifications are avoided when preprocessing is used; this is evident in cases where defects and stains look similar but the difference is enhanced by preprocessing. A few of these cases are shown in Figure 15. In most cases, the confidence with which the network predicts a certain class is increased for the enhanced images. This is because stains and defects often have similar features, but the differences are amplified when preprocessing is performed. Examples of this case are shown in Figure 16. In cases where the stain is faint, enhanced SSD MobileNet performs better than the standard network, as shown in Figure 17. This can be attributed to the fact that preprocessing the images exposes subtle features and reduces unwanted background information, making them easier for the CNN architecture to identify.

5.2.4. Comparison with Other Aircraft Skin Inspection Schemes

This section describes the comparative analysis of the proposed algorithm with existing aircraft surface inspection algorithms. The comparison has been performed based on the detection accuracy of each model. Table 3 shows the detection accuracy of various aircraft skin inspection algorithms based on conventional methods and deep learning schemes. The results indicate that the proposed scheme achieves better accuracy than the Malekzadeh et al. [27] defect inspection scheme and the Siegel and Gunatilake [45] scheme. Moreover, all these works focus only on defect detection; since none of them report stain detection, it is hard to directly compare the performance of the proposed scheme with the reported schemes.

5.2.5. Comparison with Other Defect Detection Schemes

The effectiveness of the proposed algorithm is further analyzed against other deep learning frameworks using the DAGM 2007 defect image dataset, which contains stain, crack, and pitted surfaces captured under different lighting conditions. On this dataset, the network achieved 93.2% accuracy.

Table 4 shows the detection accuracy of different schemes alongside the present scheme. Here, SSD MobileNet and the compact CNN scheme are used as texture-based CNN schemes, while the AlexNet CNN and Faster RCNN are trained on content image datasets. From this table, it can be inferred that our detection algorithm has a strong detection capability on defective images with textured backgrounds and can be adapted to various defect detection applications. This is specifically true for detecting low-contrast objects such as defects and stains. In contrast with methods such as the AlexNet CNN and Faster RCNN, our method achieves better defect classification accuracy. However, since these networks are trained using different datasets for different applications, the performance cannot be accurately compared. Faster RCNN generally performs better than SSD MobileNet in object detection applications; the disparity in the table can be due to differences in the number and types of defects present in each dataset, as well as the preprocessing methods used. The proposed approach also groups all defects into one class, compared to the individual classes in some of the compared datasets, which can lead to better detection results.

5.3. Advantages and Limitations

Generally, UAV-based inspection has many advantages over robot-based inspection due to its high mobility [35, 36, 46]. However, robot-based inspection can provide close-up images of the defect or stain compared to UAVs. Also, due to the standard distance between the platform and the surface, the chance of missing a detection due to variable focus or vibration is lower than for UAV-based models [35, 36]. This is shown in Table 4, where the authors report a loss of detection performance when a CNN is used on a UAV due to vibration issues. Furthermore, the proposed robotic architecture can be extended to include automated cleaning systems, which can clean the detected stains while also inspecting the surface for defects. The energy consumption is also reduced because the robot does not use the EDF on the upper part of the fuselage. The robot avoids areas of the airplane such as the nose, windows, and antennas, where the plane has sensitive sensors and materials that could easily be damaged by the robot. Also, the robot is designed for larger aircraft and is not suitable for small airplanes.

SSD MobileNet is a lightweight scheme which can perform real-time detection at a tradeoff in accuracy. Faster RCNN has better detection results but is larger and takes longer to run. The enhancement of images through preprocessing increases the accuracy of the proposed model while still allowing inference in real time.

6. Conclusion

This work proposed aircraft surface inspection using an indigenously developed reconfigurable climbing robot (Kiropter) and an enhanced visual inspection algorithm. An enhanced SSD MobileNet-based deep learning framework was proposed for detecting stains and defects on the aircraft surface. In the preprocessing stage, a self-filtering-based periodic pattern detection filter was included in the SSD MobileNet deep learning framework to reduce the unwanted background information and enhance the defect and stain features. The feasibility of the proposed method was verified with parts of aircraft skin in the RoAR lab and real aircraft at ITE Aerospace, Singapore. The experimental results proved that the developed climbing robot can successfully move around complex regions of the aircraft, including the fuselage and confined areas, and capture the defect and stain regions. Further, the efficiency of the detection algorithm was verified with the captured images, and its results were compared with conventional SSD MobileNet and existing defect detection algorithms. The statistical results show that the proposed enhanced SSD MobileNet framework achieves improved detection accuracy (96.2%) for aircraft surface defects (with an average confidence level of 97%) and stains (with an average confidence level of 94%). In contrast with conventional SSD MobileNet and other defect detection algorithms, the proposed scheme achieves better detection accuracy, owing to the removal of most of the unwanted background data. In our future work, we plan to test the robot with various aircraft and to increase the number of defect classification classes, such as corrosion, scratches, and pitted surfaces. We also plan to develop algorithms to localize the detected regions. Furthermore, the evaluation of the seriousness of defects can be automated as well.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Supplementary Materials

Sample video of the Kiropter robot, for demonstration purposes. (Supplementary Materials)