Advances in Civil Engineering
Volume 2021 | Article ID 5598690 | https://doi.org/10.1155/2021/5598690
Research Article | Open Access
Special Issue: Leveraging Big Data in Construction Management

Ren-Yi Kung, Nai-Hsin Pan, Charles C.N. Wang, Pin-Chan Lee, "Application of Deep Learning and Unmanned Aerial Vehicle on Building Maintenance", Advances in Civil Engineering, vol. 2021, Article ID 5598690, 12 pages, 2021. https://doi.org/10.1155/2021/5598690

Application of Deep Learning and Unmanned Aerial Vehicle on Building Maintenance

Academic Editor: Wen Yi
Received: 25 Jan 2021
Revised: 22 Mar 2021
Accepted: 13 Apr 2021
Published: 20 Apr 2021

Abstract

Several natural and human factors are responsible for the defacement of the external walls and tiles of buildings, and the related deterioration can be a public safety hazard. Active building maintenance and repair processes are therefore essential for ensuring building sustainability. However, conventional inspection methods are time-, cost-, and labor-intensive. This study therefore proposes a convolutional neural network (CNN) model for image-based automated detection and localization of key building defects (efflorescence, spalling, cracking, and defacement). Based on a pretrained VGG-16 CNN classifier, the model applies class activation mapping for object localization. After identifying the limitations of real-life application, this study evaluated the model's robustness and its ability to accurately detect and localize defects in the external wall tiles of buildings. For real-time detection and localization, the model was deployed on mobile devices and unmanned aerial vehicles (UAVs). The results show that combining deep learning with UAVs can effectively detect various kinds of external wall defects and improve detection efficiency.

1. Introduction

Changes in customer preferences may negatively affect building sustainability, well-being, and safety and may eventually intensify competition in the market. For proactive and prompt building maintenance and repair work, customers seek quick, effective building monitoring approaches to avoid severe damage and unnecessary expenditure [1]. Conventional approaches for examining building structures typically require the involvement of building surveyors who assess building elements. These assessments include lengthy site inspections for systematic recording of the building elements' physical condition on the basis of note-taking, photographs, drawings, and customer-supplied information [2], followed by analysis of the collected data and the writing of a building health assessment report. This report covers the assessed building's current state, recent updates, maintenance and repair records, and future long-term repair cost estimates [3]. However, this approach is a time-, labor-, and cost-intensive process and can endanger the surveyors' health and safety, particularly when the building to be assessed is a mid- to high-rise structure.

Convolutional neural networks (CNNs) have been applied to detect the deterioration of many structures such as roads, bridges, and tunnels but have rarely been employed to detect deterioration of building external walls [4–6]. Moreover, unmanned aerial vehicles (UAVs) have wide applications in deterioration detection. Consequently, a UAV-CNN combination for external wall deterioration detection could have practical applications, ensuring surveyor safety.

In this study, we focused on the automated image-based detection and localization of key defects (efflorescence, spalling, cracking, and defacement) in the external wall tiles of buildings. However, this study was only a pilot study and thus has a few limitations: (1) the model could not consider multiple defect types simultaneously; in other words, all the considered images belonged to only one category; (2) the model considered only images with visible defects.

Herein, this study reports a CNN application for the automated assessment of the external wall tile condition of buildings, with a brief discussion of the method for selecting the most common defects of these tiles. First, we provide a brief overview of various applications of CNNs, including deep learning techniques, for resolving computer vision problems, followed by a description of the theoretical basis of the current study. We then propose a detection and localization model based on transfer learning, using VGG-16 for both feature extraction and feature classification. Next, the localization problem and the class activation mapping (CAM) technique incorporated within the defect localization model are discussed. Subsequently, we discuss the dataset, the developed model, and the obtained results, followed by conclusions and directions for future studies.

2. Literature Review

2.1. Factors Leading to Building Deterioration

Building lifespan can vary from decades to centuries. In general, building durability can be increased through constant protection, repair, and maintenance activities [7, 8]. The deterioration rate and degree differ among building components, with construction design, material, method, construction quality, and environment being the crucial influencing factors [9]. Several factors leading to building deterioration may be divided into the following categories: natural environment (temperature, relative humidity, sunshine, wind, and water), natural disasters (earthquakes and typhoons), and human factors (design, construction, users, management, and maintenance) [10–13].

2.2. Building External Wall Tile Defects and Their Types

External wall tile defects not only influence the overall appearance of buildings but also endanger public safety; for instance, falling tiles may cause injuries. External wall tile defects can be roughly divided into five types: defacement, efflorescence, cracking, spalling, and bulging. Of these, defacement, efflorescence, cracking, and spalling have been the main focus of most studies:

(1) Defacement. Defacement, the most significant and common type of external wall tile deterioration in buildings, is closely related to the architectural shape and design of a building and the long-term influence of wind and rain on it [14]. Several major factors result in the defacement of external wall tiles. For instance, when rebar is exposed due to external wall cracks, water containing rust from the corroded iron flows out of the walls, defacing the affected areas. Moreover, the installation of accessories can damage external wall tiles, thus promoting algal and fungal growth on the affected walls.

(2) Efflorescence. Efflorescence, commonly known as whiskering, saltpetering, or "wall cancer," often affects the hollow bricks of building finishes, joints of external wall tiles, or joints of stone veneers. Efflorescence prevention in cement mortar or concrete-based structures is impossible.

(3) Cracking. The main causes of external wall cracking include overloading of buildings, uneven land subsidence, and violent shaking during earthquakes [15]. The drying shrinkage of external wall concrete, corrosion expansion of rebar, secondary construction of external wall accessories, and man-made disasters such as fire and explosion can aggravate this cracking. Furthermore, tile breakage can allow rainwater to enter the main body of a building, resulting in internal and external structural deterioration. Hence, cracks on a building's facade can affect the building's appearance and cause rainwater invasion, possibly leading to inconvenience in daily life or loss of property, or even affecting building safety and durability.

(4) Spalling. Spalling is characterized by the falling off of surface decorative materials (e.g., tiles and coating) due to reduction in adhesive strength, aging of cement mortar and concrete, poor tile quality, high temperature caused by fire, or natural forces (e.g., strong wind and violent shaking during earthquakes) [16–18].

(5) Bulging. Bulging mainly occurs between concrete and the base cement mortar. Gaps form between the layers of cement mortar and the surfaces of external wall tiles, resulting in material separation. Long-term changes in temperature or humidity lead to a reduction in adhesive strength and separation of adhesive interfaces for various adhesives.

2.3. UAVs for Building Deterioration Detection

Currently, UAVs are widely used in construction for applications that can be broadly divided into six areas: (1) building inspection, where UAVs are used for data collection to assess the current building condition [19–22]; (2) damage assessment, where the data collected by UAVs are used to assess the damage to buildings after disasters [23–26]; (3) site survey and drawings, where UAVs are used to obtain the spatial scope of a survey to make two- or three-dimensional drawings [27–29]; (4) safety inspection, where construction sites are frequently assessed according to safety standards [30, 31]; (5) schedule monitoring, where the data (mainly visual data) collected by UAVs are used to monitor construction schedules [32, 33]; and (6) other applications, which include building maintenance [34], 3D building reconstruction [35–38], material tracking, and air volume measurement [39]. Many studies have reported that UAVs improve work efficiency, reduce costs, and increase convenience [40–43].

Methods of building deterioration detection include visual assessment, percussion-based identification, rebound intensity assessment, ultrasonic wave propagation assessment, pull-out testing, infrared thermography, and UAV use [44–46]. Compared with the other methods, UAVs offer a more efficient means of collecting large amounts of building data [47, 48].

In addition to deterioration detection, UAVs can be used in environmental monitoring, traffic management, pollution monitoring, and security [49–51]. UAVs are also an important emerging technology for developing sustainable communities [52].

2.4. CNN Use for Building Deterioration Detection

With the development of deep learning, applications of automatic defect detection to community infrastructure and the built environment are increasing. CNNs have been used for rapid structural damage detection and maintenance cost estimation after serious earthquakes, providing a reference for owners and decision-makers to make accurate and timely risk management decisions [53]. Region-based CNN (R-CNN) and faster R-CNN have also been used for road damage detection and classification [54]. Other CNN applications include the detection of concrete cracks [55–57], automated detection of deformation at the bottom of steel box girders of long-span bridges [58], and automated detection of building types in street images [59]. CNNs have also gradually been applied to building external wall defect detection. Agyemang and Bader applied a CNN for detecting cracks on building external walls and assessing the defects therein [3]. Perez et al. also used CNNs to detect building defects [9]. As shown in related research, VGG-16 and CAM are commonly used methods in building defect detection.

In summary, although deep learning has been used in many engineering fields [60, 61], it has rarely been used for detecting external wall deterioration. Moreover, integrating UAVs and deep learning may increase the practical value of automated external wall deterioration detection.

3. Materials and Methods

This study developed a deep learning model to classify defects, namely, efflorescence, spalling, cracking, and defacement, in the external wall tiles of buildings. In applying CNNs, we identified the limitations and challenges posed by the nature of both the defects under investigation and their surroundings. Images showing the defect types from different external wall tile sources were collected first, and the data were then appropriately cropped and resized; the resulting dataset was used to train the network model. Next, using transfer learning with a VGG-16 model pretrained on ImageNet, this study customized and initialized the weights. Subsequently, a separate set of images, unseen by the trained model, was used to validate the trained model's robustness. Finally, this study applied CAM to address the localization problem.

3.1. Dataset

All external wall tile images were obtained using mobile phones, handheld cameras, and drones; thus, they differed in resolution and size. To increase the dataset size, the obtained images (with resolutions up to 3024 × 4032 pixels) were sliced into 224 × 224-pixel images. In total, 5680 images were used as the training dataset for our model, all of which were labeled and categorized as efflorescence (n = 1382), spalling (n = 1386), cracking (n = 1551), and defacement (n = 1361) images (Figure 1). Additionally, 10% of the images in the dataset, selected at random, formed a validation dataset. To prevent overfitting, this study applied a variety of image augmentation processes, namely, rescaling, rotation, and height and width shifts, to the training dataset. The datasets can be viewed at the following public links:
Defacement dataset: https://drive.google.com/file/d/1EFYwA3GCD5gbWoQR4P_n7t6Z-haE0IjF/view?usp=sharing
Efflorescence dataset: https://drive.google.com/file/d/1l5eBPtT1HnBCGNLKawZUq8_tbjoLK-D4/view?usp=sharing
Cracking dataset: https://drive.google.com/file/d/1YzHglz4f6sKu-Pw8D2PBbHQNkJU3wuwT/view?usp=sharing
Spalling dataset: https://drive.google.com/file/d/1ktIzpu2u3fakRTEh5H_IF1FSoKMxio1f/view?usp=sharing
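The slicing step can be sketched as follows. This is a minimal illustration (a hypothetical helper, not the authors' code) that splits a large photo into non-overlapping 224 × 224 patches, discarding partial tiles at the edges; a 3024 × 4032 photo yields 13 × 18 = 234 full tiles.

```python
import numpy as np

def slice_into_tiles(image, tile=224):
    """Split an H x W x 3 image into non-overlapping tile x tile patches,
    discarding partial tiles at the right/bottom edges."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patches.append(image[y:y + tile, x:x + tile])
    return patches

# A 3024 x 4032 photo gives 13 rows x 18 columns of full 224 x 224 tiles.
photo = np.zeros((3024, 4032, 3), dtype=np.uint8)
patches = slice_into_tiles(photo)
```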

3.2. Method for Automated Defect Detection

This study used a modified model as the feature extractor (Figure 2) and applied fine-tuned transfer learning to an ImageNet-pretrained VGG-16 network [62]. In transfer learning, the network is first trained on a large dataset so that it acquires a basic ability to recognize objects; its classification layers are then replaced to output the required categories, making the network more robust for the target task.

This study used VGG-16 because it is powerful yet has a simple architecture with relatively few layers. The architecture comprises five convolutional layer blocks with max pooling for feature extraction, followed by three fully connected layers and a final 1 × 1000 Softmax layer. The input comprises 224 × 224-pixel RGB images, and the first block consists of two convolutional layers with 32 filters, each of size 3 × 3. The second, third, and fourth convolutional blocks use 64, 128, and 256 filters of size 3 × 3, respectively. This simple architecture eases model modification for transfer learning and CAM while preserving the model's accuracy.
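As a quick check of the stated geometry, each of the five max-pooling stages halves the spatial resolution, so a 224 × 224 input reaches the classification layers as 7 × 7 feature maps. A one-line sketch of this arithmetic:

```python
def feature_map_size(input_size=224, pool_stages=5):
    """Spatial size after repeated 2x2 max pooling with stride 2."""
    size = input_size
    for _ in range(pool_stages):
        size //= 2  # each pooling stage halves height and width
    return size

# 224 -> 112 -> 56 -> 28 -> 14 -> 7
final_size = feature_map_size()
```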

In determining the hyperparameters, some default values were used directly, and others were determined by testing on the training data. The default optimizer (SGD), momentum (0.9), and weight decay (5e−4) were used without modification [63]. The learning rate was tested in the range 0.001–0.01, and convergence was most efficient at 0.01. Although many loss functions exist, cross-entropy loss was used because the research objective is basic classification. Batch sizes are usually powers of two; based on system performance, a batch size of 32 (2⁵) was chosen. To fine-tune the VGG-16 model, the initial four convolutional layer blocks were used as a generic feature extractor, and the final 1 × 1000 Softmax layer was replaced with a 1 × 4 classifier (for efflorescence, spalling, cracking, and defacement). Finally, the modified model was retrained, with only the weights of the fifth convolutional block allowed to update during training.
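The cross-entropy loss used above can be sketched for the four-class case as follows; this is a NumPy illustration of the loss, not the training code used in the study.

```python
import numpy as np

def softmax(z):
    """Convert logits to class probabilities."""
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    """Cross-entropy loss for one sample: negative log-probability of the true class."""
    p = softmax(logits)
    return -np.log(p[label])

# Logits over [efflorescence, spalling, cracking, defacement]; true class: cracking (index 2).
loss = cross_entropy(np.array([0.5, 0.1, 3.0, -1.0]), 2)
```

A confident, correct prediction (a large logit on the true class, as here) gives a small loss; a confident wrong prediction is penalized heavily.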

3.3. CAM-Based Object Localization

Object localization problems differ from image classification problems. A localization algorithm must not only determine the class of image features or objects but also detect and label the objects within the image, usually by placing a rectangular bounding box together with a confidence score [64]. For each detected object, the neural network outputs four numbers that parameterize this bounding box.

To identify discriminative regions in an image, CAM can be combined with classification-trained CNNs. In CAM, the importance of image regions relevant to a specific class is determined by reusing the weights of the CNN classifier layers, yielding optimal localization results. In this study, applying CAM to the model increased the accuracy of image localization.
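The CAM weighted-sum step can be sketched as follows. This is a minimal NumPy illustration assuming a global-average-pooling classification head, where each final-block feature map is weighted by the classifier weight connecting it to the target class; it is not the authors' implementation.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights, out_size=224):
    """Weighted sum of the final conv feature maps using one class's
    classifier weights, upsampled to the input image size.

    feature_maps: (K, h, w) activations from the last conv block
    class_weights: (K,) weights connecting each map to the target class
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (h, w)
    cam = np.maximum(cam, 0)                                 # keep positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                                     # normalise to [0, 1]
    scale = out_size // cam.shape[0]
    return np.kron(cam, np.ones((scale, scale)))             # nearest-neighbour upsampling

maps = np.random.rand(512, 7, 7)   # e.g. block-5 output of a VGG-16-style network
weights = np.random.rand(512)      # classifier weights for the predicted class
heatmap = class_activation_map(maps, weights)
```

Overlaying the normalized heatmap on the input image highlights the regions (shown in red in Figure 4) that most influenced the classification.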

4. Results

4.1. External Wall Tile Prediction

Figures 3(a) and 3(b) illustrate the loss and learning curves derived for our model on the training dataset. An epoch, presented on the horizontal axis of both curves, represents one training cycle in which the entire dataset is passed through the network. When the loss curve reaches a low value, the probability of image recognition error is low; when the learning curve approaches 1.0, the model training accuracy is high. As indicated in Figure 3(a), at around the 50th epoch, the loss curve reached stable convergence, indicating good image recognition. As presented in Figure 3(b), model training remained in a good state.

The training dataset included 5680 images, and training involved 500 epochs. As shown in Figure 3, our model was well trained. The accuracy for the optimal training dataset was 86%, with a final loss of 0.0576 at the end of the 500th epoch, and no overfitting was identified during training. As presented in Table 1, the model's precision for efflorescence, cracking, and defacement was 91%, 86%, and 98%, respectively, but that for spalling was only 76%.


Table 1: Per-class precision and recall for the test dataset.

Class          Precision  Recall  Support
Efflorescence  0.91       0.80    50
Spalling       0.76       1.00    50
Crack          0.86       0.86    50
Defacement     0.98       0.78    50

4.2. Defect Localization Using CAM

To further analyze why the precision for spalling was low, we visualized the dataset by applying CAM, a computationally inexpensive method. In the resulting images (Figure 4), large network responses are indicated in red. Figure 4 shows the focus of the various artificial neural networks.

Next, a confusion matrix (Table 2) indicated that most of the misclassified defacement, efflorescence, and cracking images in the test dataset were classified as spalling, suggesting that spalling images in the training dataset may have exhibited characteristics of defacement, efflorescence, and cracking. Thus, the training dataset's images may have been defective. This study therefore re-examined the 1386 spalling images in this dataset and found that 94.44% and 5.56% of these images presented mosaic tiles and lath bricks, respectively.


Table 2: Confusion matrix for the test dataset (rows: actual class; columns: predicted class).

Actual         Spalling  Defacement  Efflorescence  Crack
Spalling       50        0           0              0
Defacement     6         39          0              5
Efflorescence  8         0           40             2
Crack          2         1           4              43
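The precision and recall values in Table 1 can be reproduced directly from this confusion matrix: per-class precision is the diagonal entry divided by the column sum (all predictions of that class), and recall is the diagonal entry divided by the row sum (all actual samples of that class).

```python
import numpy as np

# Confusion matrix from Table 2 (rows: actual, columns: predicted),
# class order: spalling, defacement, efflorescence, crack.
cm = np.array([[50,  0,  0,  0],
               [ 6, 39,  0,  5],
               [ 8,  0, 40,  2],
               [ 2,  1,  4, 43]])

precision = np.diag(cm) / cm.sum(axis=0)  # correct / all predicted per class
recall = np.diag(cm) / cm.sum(axis=1)     # correct / all actual per class
```

For example, the spalling precision of 50/66 ≈ 0.76 reflects the 16 images of the other three classes that were misclassified as spalling, while spalling recall is a perfect 50/50.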

Because mosaic tiles have a small unit area, when these tiles fell, they left dirty black stains behind. Moreover, during the process of capturing images of sample areas, trees may have blocked the light and created shadows (Figure 5, red circles). Thus, during model training, the model misclassified various images of defacement as those of spalling (Figure 6). Similarly, lighting problems during image capture caused the misclassification of efflorescence as spalling (Figure 7, red circles). Thus, when sunlight was too bright or when the spalling pattern was irregular, the model misclassified efflorescence as spalling during model training (Figure 8). Finally, some cracking was also misclassified as spalling during model training (Figures 9 and 10, red circles).

5. Conclusions

This study combined a UAV with a deep learning model for the automated detection of external wall tile deterioration in buildings and made modifications to improve the method's efficiency. The results indicated that our model had high precision and recall, the respective rates of which were 91% and 80% for efflorescence, 76% and 100% for spalling, 86% and 86% for cracking, and 98% and 78% for defacement (Table 1).

Compared with traditional detection methods, the use of UAVs is inexpensive and affords higher mobility, efficiency, and safety. However, UAV efficiency can be affected by climate, lighting, wind, and blind spots in the test area, as well as by limitations in UAV operational technology. In the future, these limitations may be overcome through the use of more robust camera lenses, sensors, systems, and automation technologies, making UAVs safer and more efficient and increasing their application in construction.

In the current study, the recognition accuracy for spalling was slightly low, indicating some limitations in recognizing spalling from the existing images. Therefore, in future studies, the use of infrared scanners, which detect differences in depth and can recognize whether tiles have fallen, is highly recommended to improve recognition accuracy. Besides using larger datasets, a deeper network can also be considered, as deeper networks can identify more detailed characteristics and thus improve accuracy. Moreover, to identify multiple defect types simultaneously, multiple labels can be assigned to each image together with corresponding loss functions. For normal photos (without deterioration), the model should assign relatively low probabilities to all four deterioration types. Two methods are considered to further improve model adaptation: (1) setting a basic threshold in the model, so that input photos scoring below the threshold are classified as background (not belonging to the four deterioration types), and (2) collecting photos of normal exterior wall tiles, in numbers equivalent to the single-deterioration photos, as a background (fifth) type and retraining the model.
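The first proposed method, a confidence threshold for background rejection, can be sketched as follows; the threshold value of 0.5 is an assumed placeholder for illustration, not a value reported in the study.

```python
import numpy as np

CLASSES = ["efflorescence", "spalling", "cracking", "defacement"]

def classify_with_threshold(probs, threshold=0.5):
    """Return the predicted defect type, or 'background' when the model
    is not confident enough about any of the four deterioration types."""
    probs = np.asarray(probs)
    if probs.max() < threshold:
        return "background"          # no class passes the confidence bar
    return CLASSES[int(probs.argmax())]

classify_with_threshold([0.30, 0.25, 0.25, 0.20])  # -> "background"
classify_with_threshold([0.05, 0.85, 0.05, 0.05])  # -> "spalling"
```

The second proposed method instead retrains the classifier with an explicit fifth "background" output, trading extra data collection for a learned rejection boundary rather than a hand-tuned one.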

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Y. Zhang, N. Anderson, S. Bland, S. Nutt, G. Jursich, and S. Joshi, "All-printed strain sensors: building blocks of the aircraft structural health monitoring system," Sensors and Actuators A: Physical, vol. 253, pp. 165–172, 2017.
  2. Q. Kong, R. M. Allen, M. D. Kohler, T. H. Heaton, and J. Bunn, "Structural health monitoring of buildings using smartphone sensors," Seismological Research Letters, vol. 89, no. 2, pp. 594–602, 2018.
  3. D. B. Agyemang and M. Bader, "Surface crack detection using hierarchal convolutional neural network," Advances in Intelligent Systems and Computing, pp. 173–186, 2019.
  4. X. Zhao, S. Li, H. Su, L. Zhou, and K. J. Loh, "Image-based comprehensive maintenance and inspection method for bridges using deep learning," Smart Materials, Adaptive Structures and Intelligent Systems, vol. 2, Article ID V002T05A017, 2018.
  5. C. Modarres, N. Astorga, E. L. Droguett, and V. Meruane, "Convolutional neural networks for automated damage recognition and damage type identification," Structural Control and Health Monitoring, vol. 25, no. 10, Article ID e2230, 2018.
  6. A. Doulamis, N. Doulamis, E. Protopapadakis, and A. Voulodimos, "Combined convolutional neural networks and fuzzy spectral clustering for real time crack detection in tunnels," in Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 4153–4157, IEEE, Athens, Greece, October 2018.
  7. S. M. Huang, L. W. Chiang, T. T. Li, and Y. E. Lai, "Research on the common consensus of building health diagnosis and regular check based on building owner viewpoint," Journal of Architecture, vol. 71, pp. 233–253, 2010.
  8. G. Song, C. Wang, and B. Wang, "Structural health monitoring (SHM) of civil structures," Applied Sciences, vol. 7, no. 8, p. 789, 2017.
  9. H. Perez, J. H. M. Tah, and A. Mosavi, "Deep learning for detecting building defects using convolutional neural networks," Sensors, vol. 19, no. 16, p. 3556, 2019.
  10. J. A. Larbi, "Microscopy applied to the diagnosis of the deterioration of brick masonry," Construction and Building Materials, vol. 18, no. 5, pp. 299–307, 2004.
  11. P. B. Lourenço, E. Luso, and M. G. Almeida, "Defects and moisture problems in buildings from historical city centres: a case study in Portugal," Building and Environment, vol. 41, no. 2, pp. 223–234, 2006.
  12. C. Y. Yiu, D. C. W. Ho, and S. M. Lo, "Weathering effects on external wall tiling systems," Construction and Building Materials, vol. 21, no. 3, pp. 594–600, 2007.
  13. X. Chen and J. Wu, "Accessible design of the kitchen table for wheelchair users," Applied Mechanics and Materials, vol. 642, pp. 1105–1108, 2014.
  14. N. M. M. Ramos, E. Barreira, M. L. Simões, and J. M. P. Q. Delgado, "Probabilistic risk assessment methodology of exterior surfaces defacement caused by algae growth," Journal of Construction Engineering and Management, vol. 140, no. 12, Article ID 05014012, 2014.
  15. T. Yamaguchi and S. Hashimoto, "Fast crack detection method for large-size concrete surface images using percolation-based image processing," Machine Vision and Applications, vol. 21, no. 5, pp. 797–809, 2010.
  16. K. Krzemien and I. Hager, "Assessment of concrete susceptibility to fire spalling: a report on the state-of-the-art in testing procedures," Procedia Engineering, vol. 108, pp. 285–292, 2015.
  17. L.-W. Chiang, S.-J. Guo, C.-Y. Chang, and T.-P. Lo, "The development of a diagnostic model for the deterioration of external wall tiles of aged buildings in Taiwan," Journal of Asian Architecture and Building Engineering, vol. 15, no. 1, pp. 111–118, 2016.
  18. F. L. Monte, R. Felicetti, A. Meda, and A. Bortolussi, "Assessment of concrete sensitivity to fire spalling: a multi-scale experimental approach," Construction and Building Materials, vol. 212, pp. 476–485, 2019.
  19. B. Wang, L. Han, H. Zhang, Q. Wang, and B. Li, "A flying robotic system for power line corridor inspection," in Proceedings of the 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 2468–2473, Guilin, China, December 2009.
  20. C. Eschmann, C.-M. Kuo, C.-H. Kuo, and C. Boller, "High-resolution multisensor infrastructure inspection with unmanned aircraft systems," ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XL-1/W2, pp. 125–129, 2013.
  21. A. Banaszek, A. Zarnowski, A. Cellmer, and S. Banaszek, "Application of new technology data acquisition using aerial (UAV) digital images for the needs of urban revitalization," in Proceedings of the 10th International Conference on Environmental Engineering, ICEE, Vilnius, Lithuania, April 2017.
  22. C. Zhang and A. Elaksher, "An unmanned aerial vehicle-based imaging system for 3D measurement of unpaved road surface distresses," Computer-Aided Civil and Infrastructure Engineering, vol. 27, no. 2, pp. 118–129, 2012.
  23. K. S. Pratt, R. R. Murphy, J. L. Burke, J. Craighead, C. Griffin, and S. Stover, "Use of tethered small unmanned aerial system at Berkman Plaza II collapse," in Proceedings of the 2008 IEEE International Workshop on Safety, Security and Rescue Robotics, pp. 134–139, Sendai, Japan, October 2008.
  24. N. Michael, S. Shen, K. Mohta et al., "Collaborative mapping of an earthquake damaged building via ground and aerial robots," Springer Tracts in Advanced Robotics, pp. 33–47, 2014.
  25. T. Yamamoto, H. Kusumoto, and K. Banjo, "Data collection system for a rapid recovery work: using digital photogrammetry and a small unmanned aerial vehicle (UAV)," in Proceedings of Computing in Civil and Building Engineering (2014), pp. 875–882, Orlando, Florida, June 2014.
  26. M. M. Torok, M. Golparvar-Fard, and K. B. Kochersberger, "Image-based automated 3D crack detection for post-disaster building assessment," Journal of Computing in Civil Engineering, vol. 28, no. 5, Article ID A4014004, 2014.
  27. F. Neitzel and J. Klonowski, "Mobile 3D mapping with a low-cost UAV system," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 38, pp. 1–6, 2011.
  28. S. Siebert and J. Teizer, "Mobile 3D mapping for surveying earthwork projects using an unmanned aerial vehicle (UAV) system," Automation in Construction, vol. 41, pp. 1–14, 2014.
  29. S. Bang, H. Kim, and H. Kim, "Vision-based 2D map generation for monitoring construction sites using UAV videos," in Proceedings of the International Symposium on Automation and Robotics in Construction (ISARC), Taipei, Taiwan, June 2017.
  30. J. Irizarry and D. B. Costa, "Exploratory study of potential applications of unmanned aerial systems for construction management tasks," Journal of Management in Engineering, vol. 32, no. 3, Article ID 05016001, 2016.
  31. B. Richman, M. P. Bauer, B. J. Michini, and A. J. Poole, U.S. Patent No. 9, U.S. Patent and Trademark Office, Washington, DC, 2017.
  32. H. Freimuth, J. Müller, and M. König, "Simulating and executing UAV-assisted inspections on construction sites," in Proceedings of the 34th International Symposium on Automation and Robotics in Construction (ISARC 2017), Taipei, Taiwan, June 2017.
  33. J. J. Lin, K. K. Han, and M. Golparvar-Fard, "A framework for model-driven acquisition and analytics of visual data using UAVs for automated construction progress monitoring," Computing in Civil Engineering 2015, pp. 156–164, 2015.
  34. S. Lavy, J. Irizarry, M. Gheisari, G. Williams, and K. Roper, "Ambient intelligence environments for accessing building information," Facilities, 2014.
  35. C. Wefelscheid, R. Hänsch, and O. Hellwich, "Three-dimensional building reconstruction using images obtained by unmanned aerial vehicles," International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 38, no. 1, 2011.
  36. X. Feifei, L. Zongjian, G. Dezhu, and L. Hua, "Study on construction of 3D building based on UAV images," The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 39, p. B1, 2012.
  37. J. Langhammer, B. Janský, J. Kocum, and R. Minařík, "3-D reconstruction of an abandoned montane reservoir using UAV photogrammetry, aerial LiDAR and field survey," Applied Geography, vol. 98, pp. 9–21, 2018.
  38. S. Sun and B. Wang, "Low-altitude UAV 3D modeling technology in the application of ancient buildings protection situation assessment," Energy Procedia, vol. 153, pp. 320–324, 2018.
  39. H. Tomita, T. Takabatake, S. Sakamoto, H. Arisumi, S. Kato, and Y. Ohgusu, "Development of UAV indoor flight technology for building equipment works," in Proceedings of the International Symposium on Automation and Robotics in Construction (ISARC), IAARC Publications, Taipei, Taiwan, July 2017.
  40. L. Johnson, S. Ponda, H. L. Choi, and J. How, "Improving the efficiency of a decentralized tasking algorithm for UAV teams with asynchronous communications," in Proceedings of the AIAA Guidance, Navigation, and Control Conference, p. 8421, Monterey, California, August 2010.
  41. M. A. Goodrich, B. S. Morse, C. Engh, J. L. Cooper, and J. A. Adams, "Towards using unmanned aerial vehicles (UAVs) in wilderness search and rescue," Interaction Studies, vol. 10, no. 3, pp. 453–478, 2009.
  42. A. Colefax, "Developing the use of drones for non-destructive shark management and beach safety," 2020.
  43. J. K. Lee, J. O. Kim, and S. J. Park, "A study on the UAV image-based efficiency improvement of bridge maintenance and inspection," Journal of Intelligent & Fuzzy Systems, vol. 36, no. 2, pp. 967–983, 2019.
  44. C.-Y. Chang, S.-S. Hung, L.-H. Liu, and C.-P. Lin, "Innovative strain sensing for detection of exterior wall tile lesion: smart skin sensory system," Materials, vol. 11, no. 12, p. 2432, 2018.
  45. B. L. Luk, K. P. Liu, and F. Tong, “Rapid evaluation of tile-wall bonding integrity using multiple-head impact acoustic method,” NDT & E International, vol. 44, no. 3, pp. 297–304, 2011. View at: Publisher Site | Google Scholar
  46. C. Y. Chang, Y. C. Yi, S. S. Hung, Y. S. Lee, and Y. C. Chang, “Applying strain-sensing technology for monitoring and diagnosing peel-based deterioration of tiled exterior walls,” International Journal of Civil, Structural, Environmental and Infrastructure Engineering Research and Development, vol. 7, pp. 39–48, 2017. View at: Google Scholar
  47. M. Behm, “Linking construction fatalities to the design for construction safety concept,” Safety Science, vol. 43, no. 8, pp. 589–611, 2005. View at: Publisher Site | Google Scholar
  48. S. Chi and S. Han, “Analyses of systems theory for construction accident prevention with specific reference to OSHA accident reports,” International Journal of Project Management, vol. 31, no. 7, pp. 1027–1041, 2013. View at: Publisher Site | Google Scholar
  49. Y. M. Chen, L. Dong, and J.-S. Oh, “Real-time video relay for UAV traffic surveillance systems through available communication networks,” in Proceedings of the 2007 IEEE Wireless Communications and Networking Conference, pp. 2608–2612, Hong Kong, China, March 2007. View at: Publisher Site | Google Scholar
  50. K. Kanistras, G. Martins, M. J. Rutherford, and K. P. Valavanis, “May). A survey of unmanned aerial vehicles (UAVs) for traffic monitoring,” in Proceedings of the 2013 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 221–234, IEEE, Atlanta, Georgia, USA, May 2013. View at: Publisher Site | Google Scholar
  51. N. Mohamed, J. Al-Jaroodi, I. Jawhar, A. Idries, and F. Mohammed, “Unmanned aerial vehicles applications in future smart cities,” Technological Forecasting and Social Change, vol. 153, Article ID 119293, 2020. View at: Publisher Site | Google Scholar
  52. F. Mohammed, A. Idries, N. Mohamed, J. Al-Jaroodi, and I. Jawhar, “UAVs for smart cities: opportunities and challenges,” in Proceedings of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 267–273, Orlando, Florida, USA, May 2014. View at: Publisher Site | Google Scholar
  53. X. Pan and T. Y. Yang, “Postdisaster image‐based damage detection and repair cost estimation of reinforced concrete buildings using dual convolutional neural networks,” Computer-Aided Civil and Infrastructure Engineering, vol. 35, no. 5, pp. 495–510, 2020. View at: Publisher Site | Google Scholar
  54. M. S. Arman, M. M. Hasan, F. Sadia, A. K. Shakir, K. Sarker, and F. A. Himu, “Detection and classification of road damage using R-CNN and faster R-CNN: a deep learning approach,” in Proceedings of the International Conference on Cyber Security and Computer Science, pp. 730–741, Dhaka, Bangladesh, February 2020. View at: Publisher Site | Google Scholar
  55. Y.-J. Cha, W. Choi, and O. Büyüköztürk, “Deep learning-based crack damage detection using convolutional neural networks,” Computer-Aided Civil and Infrastructure Engineering, vol. 32, no. 5, 2018. View at: Publisher Site | Google Scholar
  56. S. Dorafshan, R. J. Thomas, and M. Maguire, “Comparison of deep convolutional neural networks and edge detectors for image-based crack detection in concrete,” Construction and Building Materials, vol. 186, pp. 1031–1045, 2018. View at: Publisher Site | Google Scholar
  57. Y. Ren, J. Huang, Z. Hong et al., “Image-based concrete crack detection in tunnels using deep fully convolutional networks,” Construction and Building Materials, vol. 234, Article ID 117367, 2020. View at: Publisher Site | Google Scholar
  58. D. Wang, Y. Zhang, Y. Pan, B. Peng, H. Liu, and R. Ma, “An automated inspection method for the steel box girder bottom of long-span bridges based on deep learning,” IEEE Access, vol. 8, pp. 94010–94023, 2020. View at: Publisher Site | Google Scholar
  59. D. Gonzalez, D. Rueda-Plata, A. B. Acevedo et al., “Automatic detection of building typology using deep learning methods on street level images,” Building and Environment, vol. 177, Article ID 106805, 2020. View at: Publisher Site | Google Scholar
  60. X. Hou, Y. Zeng, and J. Xue, “Detecting structural components of building engineering based on deep-learning method,” Journal of Construction Engineering and Management, vol. 146, no. 2, Article ID 04019097, 2020. View at: Publisher Site | Google Scholar
  61. Y.-J. Cha, W. Choi, and O. Büyüköztürk, “Deep learning-based crack damage detection using convolutional neural networks,” Computer-Aided Civil and Infrastructure Engineering, vol. 32, no. 5, pp. 361–378, 2017. View at: Publisher Site | Google Scholar
  62. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, Las Vegas, NV, USA, June 2016. View at: Publisher Site | Google Scholar
  63. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 2014, https://arxiv.org/abs/1409.1556. View at: Google Scholar
  64. Z.-Q. Zhao, P. Zheng, S.-T. Xu, and X. Wu, “Object detection with deep learning: a review,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, pp. 3212–3232, 2019. View at: Publisher Site | Google Scholar

Copyright © 2021 Ren-Yi Kung et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
