Advances in Civil Engineering

Special Issue: Natural Hazards Challenges to Civil Engineering


Research Article | Open Access


P. Shane Crawford, Mohammad A. Al-Zarrad, Andrew J. Graettinger, Alexander M. Hainen, Edward Back, Lawrence Powell, "Rapid Disaster Data Dissemination and Vulnerability Assessment through Synthesis of a Web-Based Extreme Event Viewer and Deep Learning", Advances in Civil Engineering, vol. 2018, Article ID 7258156, 13 pages, 2018.

Rapid Disaster Data Dissemination and Vulnerability Assessment through Synthesis of a Web-Based Extreme Event Viewer and Deep Learning

Academic Editor: André Barbosa
Received: 22 Jul 2018
Revised: 16 Oct 2018
Accepted: 17 Oct 2018
Published: 13 Nov 2018


Infrastructure vulnerability has drawn significant attention in recent years, partly because of the occurrence of low-probability, high-consequence disruptive events such as the 2017 hurricanes Harvey, Irma, and Maria; the 2011 Tuscaloosa and Joplin tornadoes; and the 2015 Gorkha, Nepal, and 2017 Central Mexico earthquakes. Civil infrastructure systems support social welfare, so their viability and sustained operation are critical. A variety of frameworks, models, and tools exist for advancing infrastructure vulnerability research. Nevertheless, providing accurate vulnerability measurement remains challenging. This paper presents a state-of-the-art data collection and information extraction methodology to document infrastructure at high granularity to assess preevent vulnerability and postevent damage in the face of disasters. The methods establish a baseline of preevent infrastructure functionality that can be used to measure impacts and temporal recovery following a disaster. The Extreme Events Web Viewer (EEWV) presented as part of the methodology is a GIS-based web repository storing spatial and temporal data describing communities before and after disasters and facilitating data analysis techniques. This web platform can store multiple geolocated data formats, including photographs and 360° videos. A tool for automated extraction of photography from 360° video data at locations of interest specified in the EEWV was created to streamline data utility. The extracted imagery provides a manageable data set for efficiently documenting characteristics of the built and natural environment. The methodology was tested to locate buildings vulnerable to flood and storm surge on Dauphin Island, Alabama. Approximately 1,950 buildings were passively documented with vehicle-mounted 360° video. Extracted building images were used to train a deep learning neural network to predict whether a building was elevated or nonelevated.
The model was validated, and methods for iterative neural network training are described. The methodology, which spans rapid collection of large passive datasets, storage of the data in an open repository, extraction of manageable datasets, and information extraction through deep learning, will facilitate vulnerability and postdisaster analyses as well as longitudinal recovery measurement.

1. Introduction

Capturing pre- and postevent disaster data is critical in furthering the measurement science for disaster resilience as well as assessing vulnerable aspects of the built environment. Preevent data provide the fundamental ground-truth of the built and natural environment before a disaster strikes, which is used as a baseline to measure impact and performance following an event. Additionally, these data are used to assess vulnerable areas of a community and direct mitigation strategies in these areas, which can lessen the effects of extreme events. To adequately assess the built and natural environment, data must be collected in granular spatial and temporal formats. Although significant progress has been made in the application of science and technology to reduce disaster effects, there are still many challenges related to preparation, response, and recovery. In 2004, the National Science Board recognized the need for a cyber-integrated scientific research platform for collection, transfer, mining, and storage of perishable data [1]. In 2008, the National Science and Technology Council (NSTC) Subcommittee on Disaster Reduction put forth a list of grand challenges for disaster reduction. The first of these challenges is to provide hazard and disaster information where and when it is needed. The subcommittee also recommended that disaster information be provided through a real-time mechanism of data collection and interpretation that is readily available and usable by all scientists instead of a select few [2]. Collecting data and information is critical to addressing additional grand challenges, which include developing hazard mitigation strategies and technologies, reducing the vulnerability of infrastructure, and assessing disaster resilience [2].

Currently, data collected by disaster reconnaissance teams for a particular study are not typically integrated, distributed, or stored and made available for use by others in the scientific community; they are used primarily in isolation by separate research teams and may be published in a limited capacity with the results of investigations carried out with the data. Current strategies have advanced research in postdisaster studies for earthquakes [3–6], tornadoes [7–11], and hurricanes [12–15]. Reuse of the collected information is typically in the form of meta-analysis of the literature and not of the collected data themselves. Advances in data collection and storage technologies offer the ability to meet these needs with modern sensors, platforms, and data collection techniques. However, emerging data collection equipment requires standards, protocols, and workflows, and data storage technology and software require data management strategies, all of which must be effectively implemented to further research goals. Big data issues often accompany the new technology and make management and extraction of salient information difficult [16]. Processes for extracting information from these datasets are critical for researchers to further measurement science. Additionally, long-term storage and dissemination of data across organizational platforms and stakeholders are needed to break down siloed research fields and benefit public understanding of disasters and mitigation strategies. A methodology of disaster data collection is needed, from developing data collection standards and protocols to storing and disseminating the collected data across organizational boundaries. Extracting information from collected data will enhance measurement science and fuel fundamental research on community vulnerability as well as other research areas where ground-truth information about the natural and manmade environment promotes research.

The methodology presented consists of an approach to collecting highly granular spatial and temporal data and storing the collected data in a web-based repository. Figure 1 presents the workflow. Large image datasets are collected using vehicle-mounted 360° video. The web-based repository, called the Extreme Events Web Viewer (EEWV), stores geolocated data in a temporal format. It is an open platform where data can be uploaded at the location collected and provided with label data, for example, vulnerability metrics for community infrastructure. The EEWV provides tools to extract manageable image datasets, which can be efficiently analyzed, from large video files. Implementing deep learning approaches to combine labelled imagery with a pretrained model architecture allows new models to be derived to classify unlabelled images. The approach can be useful in disaster and failure scenarios as well as recovery studies. The methodology is meant to facilitate development of hazard mitigation strategies to reduce infrastructure vulnerability and assess disaster resilience. A case study of flood vulnerability on Dauphin Island in Alabama, a barrier island in the Gulf of Mexico that has experienced major damage from past hurricanes, is presented to illustrate the methodology. Flood vulnerability analyses, such as the case study presented for Dauphin Island, can aid in disaster and mitigation planning for these communities. Federal money has been spent to purchase homes in flood-prone areas [17] and establish the National Flood Insurance Program [18], along with various other mitigation techniques. A federal report released in 2005 showed that every $1 spent on disaster mitigation saved $4 in future disaster costs [19]. Effectively prioritizing mitigation strategies requires knowledge of where the vulnerabilities in communities exist.
Multiple vulnerability models, such as the Disaster Resilience of Place model [20], have been created to provide a framework for vulnerability assessment. The methods described here can quickly provide the fundamental information needed to drive these mitigation plans and vulnerability assessments in a format which is easily accessible to all community stakeholders.

2. Methodology

2.1. Extreme Events Web Viewer

The Extreme Events Web Viewer (EEWV) has been created at The University of Alabama to facilitate data storage, dissemination, and analysis of community and extreme event data. The EEWV is a web-based Internet clearinghouse using geographic information systems (GIS) tools to facilitate fundamental research and disseminate data to a broad user base. Data describing a community before or after an event are uploaded, displayed geographically, and provided with attribute data to facilitate geospatial analyses. Survey locations storing photographs, PDFs, audio files, or other data types are added to the EEWV in a spatial and temporal format and symbolically display attribute metadata. Building locations are marked separately from survey locations and can be used for data integration and analysis. Integration includes consolidating all survey location data describing a single building. One analysis technique, which will be presented as part of the methodology detailed here, extracts images from passive, vehicle-mounted 360° video data. The EEWV is set up to accept new analysis tools to broaden the resources available to researchers. Figure 2(a) shows the data types stored in the EEWV, including survey locations shown in magenta (when highlighted, survey locations change to cyan, as shown in the figure), building locations shown in blue, and video paths depicted as green lines. When a survey location is selected, a data view window, shown in Figure 2(b), opens to display data stored in the survey location.
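A survey location of the kind described above is, at its core, a geolocated feature with a timestamp and attached media. The sketch below shows one such record as a GeoJSON-style Python dict; the field names and helper function are illustrative assumptions, not the actual EEWV schema.

```python
# Hypothetical survey-location record in GeoJSON style; field names
# are illustrative assumptions, not the actual EEWV schema.
def make_survey_location(lon, lat, timestamp, attachments, building_id=None):
    """Build a geolocated survey feature carrying attribute metadata."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {
            "timestamp": timestamp,      # temporal component of the record
            "attachments": attachments,  # photographs, PDFs, audio files, ...
            "building_id": building_id,  # link used for data integration
        },
    }

# Example: a photo survey location related to a building location
loc = make_survey_location(-88.11, 30.25, "2018-04-12T14:03:00Z",
                           ["photo_0123.png"], building_id=417)
```

Relating each survey location to a building identifier is what allows all data describing a single building to be consolidated, as described in the text.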

Postdisaster reconnaissance has typically been conducted by groups of researchers traversing a damaged area on foot and collecting digital photography, video, or human subject surveys describing infrastructure performance, social impacts, or other effects of the studied disaster. This approach works well for studies requiring close-range inspection with photography to highlight specific details, such as determining component failure mechanisms, but it requires large investments of personnel and time if data describing a large area are required. This often leads to expensive reconnaissance trips or studies focused on a small portion of the affected area. Additionally, information describing the preevent functionality of the community is often not prioritized, even in high-risk areas such as coastal communities, which are vulnerable to hurricanes and flooding, or Tornado and Dixie Alleys, which have high vulnerability to tornadoes. Many communities in seismic regions track infrastructure with seismic design or retrofits. Recently, researchers have explored using drone videos to capture larger areas in smaller amounts of time [21]. This method is beneficial, but drone platforms suffer from short battery life, legal requirements such as the need for pilots to maintain line-of-sight with the platform and obtain airspace permission, and pushback from community members due to privacy concerns. An approach for rapid, passive, and large-volume data collection using vehicle-mounted 360° cameras with onboard Global Positioning System (GPS) sensors has been created to meet the needs of researchers following an event as well as before a disaster occurs.

The EEWV facilitates visualization of collected videos using GPS latitude, longitude, and timestamp values. Figure 3(a) shows GPS location points collected using a vehicle-mounted camera. GPS point accuracy varies with the unit used, from subfoot accuracy in the costliest units to lower accuracy in consumer-grade models [22, 23]. Many modern cameras are GPS-integrated, but external GPS sensors can be paired to increase the positioning accuracy when GPS-integrated camera accuracy is not sufficient for analysis. GPS location points typically vacillate around the true driven path with varying error based on the GPS used (GPS locations were measured up to nearly 10 meters from true locations in Figure 3), but large errors can be encountered when collecting video in the presence of disrupting infrastructure such as large buildings, trees, or highway overpasses. To increase the positioning accuracy in this study, location values were snapped to roadway lines obtained from the publicly available United States Census Bureau Topologically Integrated Geographic Encoding and Referencing (TIGER) data repository, as shown in Figure 3(b). Snapping moved points closer to the true locations where GPS points were taken in this study, as evidenced in Figure 3. This procedure may not be necessary in situations where highly accurate GPS is used; in this study, photo extraction was more accurate when GPS locations were snapped. A video path polyline was created by connecting snapped GPS points with sequential timestamps, which captured the correct trajectory even when multiple passes were conducted on a roadway. The video path line typically follows TIGER roadway line geometry exactly, but in some cases inaccuracies exist due to the snapping procedure. Due to location inaccuracy, GPS points were snapped to incorrect roadway lines in some cases. This occurred when a GPS point was located closer to an incorrect roadway line than to the roadway where the point was collected.
In most cases, this occurred at roadway intersections and where dual carriageways were represented in the TIGER data with single carriageways. GPS locations at intersections were rarely used for image extraction, due to images typically being extracted only at the nearest GPS location to a building, and the error at dual carriageways was reduced because buildings are typically located further from these roadway types than single carriageways.
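The snapping step described above amounts to projecting each GPS fix onto the nearest roadway segment and keeping the projected point. The sketch below shows this in planar coordinates; a real implementation would operate on TIGER polylines in a projected coordinate system, and the function names are illustrative.

```python
def snap_to_segment(p, a, b):
    """Project point p onto segment a-b; all points are (x, y) tuples."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:  # degenerate segment: both endpoints coincide
        return a
    # Parametric position of the projection, clamped to stay on the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return (ax + t * dx, ay + t * dy)

def snap_to_roads(p, segments):
    """Return the closest projected point over all roadway segments."""
    def dist2(q):
        return (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
    return min((snap_to_segment(p, a, b) for a, b in segments), key=dist2)
```

The failure mode noted in the text falls out of this formulation directly: a noisy fix is assigned to whichever segment happens to be nearest, which at intersections or collapsed dual carriageways may not be the roadway actually driven.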

A potential source of error in the geolocation process used here occurs where TIGER linework does not match the actual roadways, for example if a new road has been constructed and TIGER roadway lines have not been updated to accommodate the change. Leveraging alternative open-source roadway line repositories, such as OpenStreetMap [24], may reduce the error stemming from this effect. Additionally, many communities maintain highly accurate roadway lines of their own. These should be used when available as they typically constitute the most accurate account of roadway location in that community. Additionally, inaccurate data can be manually corrected using GIS software packages.

It should be noted that these approaches have been tested and calibrated for suburban and moderate-density urban environments. High-density urban environments provide their own difficulty in locating and extracting data with high accuracy, and more extensive testing in these regions is needed. Furthermore, due to the highly variable nature of global infrastructure, especially in postdisaster scenarios when roadways may not be accessible or viable, a single geolocation methodology may not be applicable in all contexts, but the current work is a substantial contribution to current data collection methods.

The geolocation approach enhances visualization of collected videos in the EEWV by updating vehicle location on a map while a video is playing. Figure 4 shows the vehicle-mounted 360° video display approach with the map and vehicle location in Figure 4(a). A video player opens in the EEWV when a video path is selected, as shown in Figure 4(b). The video player includes pan and zoom features which allow users to manually inspect collected videos to analyze the environment captured in the video. The video GPS locations are initially invisible to the user. Each GPS location includes a timestamp which is synced to the video time. When the video time matches the timestamp of a GPS location, the GPS location is selected and becomes visible as a cyan point (similar to survey locations which highlight cyan when selected) on the video path to show the location of the vehicle. The field of view corresponding to the selected video is shown in Figure 4(a) as a yellow triangle. Survey and building locations can be added in the EEWV, either manually or through upload of GIS shapefiles, to store information gained from videos.
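Syncing the map marker to video playback can be done by keeping the GPS fixes sorted by timestamp and, at each playback tick, selecting the fix closest in time to the current playback position. A minimal standard-library sketch follows; the tuple layout of a fix is an assumption for illustration.

```python
import bisect

def nearest_fix(fixes, video_time):
    """Return the GPS fix closest in time to the playback position.

    fixes: list of (timestamp_seconds, lat, lon) tuples sorted by
    timestamp (a hypothetical layout, not the EEWV's internal format).
    """
    times = [f[0] for f in fixes]
    i = bisect.bisect_left(times, video_time)
    # The nearest fix is either the one just before or just at/after i
    candidates = [j for j in (i - 1, i) if 0 <= j < len(fixes)]
    best = min(candidates, key=lambda j: abs(times[j] - video_time))
    return fixes[best]
```

With fixes sorted once, each lookup is logarithmic in the number of GPS points, which keeps the marker update cheap even for long video paths.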

The vehicle-mounted camera approach allows researchers to rapidly capture large passive datasets describing buildings, distribution networks, and other infrastructure. This approach creates a large volume of remotely sensed data that require a large time investment to manually analyze. Data can be uploaded from the field to the EEWV when Internet connection is available, although in postevent scenarios data should be managed locally when the state of the impacted community prevents large data transmission. While the open format of the EEWV allows multiple researchers to conduct an assessment simultaneously, which would make the data mining process more efficient, automated data extraction tools significantly improve assessment speed. The Extreme Events Video Capture (EEVC) tool allows users to automatically extract images from the 360° video at locations of interest.

2.2. Extreme Events Video Capture Tool

A video display window for the EEVC tool was created using Unity, a graphic development platform, and provides an interface to interact with 360° video geometry and extract images at specified video geometric orientations. The tool can function manually but is intended to interact with the EEWV to automate image extraction and upload processes. Figure 5 shows the video display window of the EEVC tool. The video time is displayed, and buttons allow the user to pause, slow the video down to quarter or half speed, or speed up to two or five times normal playback speed. Users can pan horizontally and vertically, and the orientation of the video geometry along both axes is displayed. Screenshots of the displayed window can be extracted and saved as Portable Network Graphics (PNG) image files.

A custom tool was created to automate extraction of images from 360° videos by calling the video player and providing parameters which allow the tool to interface with the EEWV. The parameters include a video path identifier, a maximum distance boundary, a camera direction relative to the direction of travel, and a processing method for photo extraction. Figure 6 provides a graphical illustration of the image extraction process. The video path identifier corresponds to the intended video path stored in the EEWV; for the example provided in Figure 6, Video Path 1 would be used as the input parameter. The maximum distance boundary is the distance from the camera to the furthest point of interest in the video, measured in meters. This parameter defines a boundary around the video path that specifies only building points visible in the video specified by the video path identifier. The maximum distance boundary is shown in red in Figure 6, where one side of the boundary is shown; the boundary extends the same distance on the opposite side of the video path, though this is not shown. Latitude and longitude values for building locations within the maximum distance boundary are used to calculate rotation angles between the vehicle direction of travel and building point locations. The camera orientation relative to the direction of travel parameter is used to calibrate the direction of travel to 0° in the horizontal plane of the video. Many commercial 360° cameras do not specify the orientation on the device; therefore, the camera can be mounted to the car at various angles (e.g., with 0° pointed backwards). This parameter is optional; technicians are typically trained before data collection, but errors in camera mounting are possible. To illustrate this scenario, the camera direction relative to the direction of travel in Figure 6 is illustrated as a variable.
The processing method parameter allows users to extract images from the closest vehicle GPS location (typically located at a 90° angle between the direction of travel and the building location) or at specified angles provided as a list. Providing a list of angles allows images to be extracted showing different sides of the point of interest; for example, if images of the front façade and both side walls of Building Location Z in Figure 6 are desired, the user could input 60°, 90°, and 120° to extract images from GPS locations A, B, and C, respectively. Specifying angles is also useful in scenarios where occlusions occur in the image extracted at the closest GPS location. The process shown in Figure 6 would iterate through all building locations within the maximum distance boundary specified for the identified video path to extract images at all angles specified. Once the process has completed, survey locations are created at a location half of the distance between the building location and the GPS location used for the photo extraction. If multiple angles are specified in the processing method parameter, multiple survey locations will be created for each building point. If the area is visited multiple times to document temporal change, survey locations will multiply and potentially become difficult to manage. To streamline data management, all survey locations extracted for a building location are automatically related to that building location. When the building location is selected, all images stored in related survey locations will be displayed.
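The rotation the tool needs for each extraction is the angle between the vehicle's direction of travel and the bearing from the GPS location to the building, offset by the camera's mounting angle. A small planar sketch of that geometry follows; the clockwise-from-north convention and flat-earth approximation are simplifying assumptions, not the tool's documented implementation.

```python
import math

def heading(p, q):
    """Bearing in degrees from point p to point q, measured clockwise
    from north on a local flat-earth approximation; points are (x, y)
    with x pointing east and y pointing north."""
    return math.degrees(math.atan2(q[0] - p[0], q[1] - p[1])) % 360.0

def extraction_yaw(gps, next_gps, building, mount_offset_deg=0.0):
    """Horizontal angle to rotate the 360° video geometry so the
    building is centered, given the travel direction (from consecutive
    GPS fixes) and a camera mounting offset (hypothetical convention)."""
    travel = heading(gps, next_gps)        # vehicle direction of travel
    to_building = heading(gps, building)   # bearing toward the building
    return (to_building - travel - mount_offset_deg) % 360.0
```

For a building directly abeam of the vehicle, this yields the 90° case described above; supplying other building positions along the path reproduces the 60°/120° style extractions from earlier or later GPS locations.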

This methodology allows researchers to rapidly collect data for large areas to document a community before a disaster occurs to benchmark infrastructure functionality or determine vulnerability just after a disaster to preserve perishable data describing infrastructure performance in a disaster scenario or at multiple times after a disaster to document recovery progress. The methodology was developed to capture large passive video datasets, parse the data into manageable image files, and provide the ability to store, visualize, and add value to the collected data through the EEWV. The image extraction technique can be applied to supplement other data collection equipment such as static 360° photography offered by the Natural Hazards Engineering Research Infrastructure RAPID Facility [25]. While these methods provide data where and when needed, meeting the grand challenge posed by the NSTC subcommittee, there arises a big data issue. In general, big data refers to high volume and high variety data that require cost-effective and advanced forms of information processing to enable better understanding of the data [16]. The EEVC tool extracts manageable datasets from collected videos, but for a researcher or community official to manually inspect these large volumes of data to obtain useful information, especially at multiple time intervals, a large time investment is required.

2.3. Deep Learning Approaches

Recent advances in machine and deep learning approaches have facilitated automated information extraction techniques from image data. To meet the needs of the research community as well as communities located in vulnerable areas, an approach has been developed to synthesize the data collection tools described earlier with a deep learning application. The use of deep learning can automate measurements in image data, thereby reducing big data issues created by production of large image datasets.

Deep learning, a branch of the wider machine learning research field, has shown recent advancements in image classification, in some cases producing results with higher accuracy than humans [26]. Deep learning models use neural networks to process and classify images. TensorFlow, a free and open-source software library created by Google to support and implement machine learning models [27], is used here to classify imagery stored in the EEWV. Image classification is conducted using the Inception model [28, 29]. The Inception model trains a convolutional neural network (CNN) to classify images, where each layer of the neural network classifies an aspect of the image using feature representation and similarity measurement (e.g., one layer may implement an edge detection algorithm, while another layer may group similar pixel color values). Features created in each layer of the CNN are general enough to account for image variation. Outputs from each layer are fed into subsequent layers, increasing abstraction throughout (e.g., edge detection layers feed into shape detection layers). The Inception model was originally trained on 100,000 images across 1,000 classification labels. To adapt the Inception model to new image classifications, transfer learning can be used to retrain the model using new imagery.

An application has been developed at the University of Alabama to facilitate transfer learning using the TensorFlow architecture and Inception model to train CNNs using images with new labels. The newly trained neural network can then be used to classify new imagery into the label classes specified in retraining. The output includes a label class prediction and a label class likelihood for each image. Classification accuracy is dependent on the number of images used in training as well as variation present in the training images. Common sources of image variation include resolution, light, and color. To create a robust classification model, training images should include an adequate amount of variation to account for the variation encountered in the images being classified by the model. Once classification predictions have been made, the information can be associated with building locations to provide a spatial perspective of the classified information.
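Transfer learning as described keeps the pretrained convolutional layers fixed and retrains only the final classification layer on the new labels. The pure-Python sketch below trains a softmax head on fixed feature vectors (a toy stand-in for Inception bottleneck features) and returns a label prediction with its likelihood, mirroring the output described in the text; it is a conceptual illustration, not the TensorFlow retraining application itself.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def train_head(features, labels, n_classes, lr=0.5, epochs=200):
    """Retrain only the final layer: a softmax classifier over fixed
    feature vectors, trained by stochastic gradient descent on
    cross-entropy loss."""
    d = len(features[0])
    W = [[0.0] * d for _ in range(n_classes)]  # one weight row per label
    b = [0.0] * n_classes
    for _ in range(epochs):
        for x, y in zip(features, labels):
            probs = softmax([sum(wi * xi for wi, xi in zip(W[c], x)) + b[c]
                             for c in range(n_classes)])
            for c in range(n_classes):  # gradient step per class
                g = probs[c] - (1.0 if c == y else 0.0)
                b[c] -= lr * g
                for i in range(d):
                    W[c][i] -= lr * g * x[i]
    return W, b

def predict(W, b, x):
    """Return (predicted label index, likelihood) for one feature vector."""
    probs = softmax([sum(wi * xi for wi, xi in zip(row, x)) + bias
                     for row, bias in zip(W, b)])
    best = max(range(len(probs)), key=probs.__getitem__)
    return best, probs[best]

# Toy demo: two-dimensional features standing in for bottleneck vectors
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
W, b = train_head(feats, [0, 0, 1, 1], n_classes=2)
label, likelihood = predict(W, b, [1.0, 0.0])
```

Because only the small head is trained, the approach needs far fewer labeled images than training a full CNN from scratch, which is what makes the 50-to-100-image training sets in the case study feasible at all.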

2.4. Dauphin Island Case Study, Results, and Discussion

The data collection and deep learning approaches were applied to a case study on Dauphin Island, Alabama. Dauphin Island is a barrier island in the Gulf of Mexico located at the south end of Mobile County. The west side of the island is made up primarily of vacation and rental property, while the east end holds more permanent residents [30]. The west side is entirely beachfront while the east side contains maritime forests, as shown in Figure 7(a). The island is vulnerable to hurricanes that form in the Gulf of Mexico, with infrastructure damage due to wind and flooding as well as coastal erosion common aftereffects of these severe storms. Hurricane Camille in 1969 flooded over half of the island. Hurricane Frederic, which made landfall on Dauphin Island in 1979 as a Category 4 hurricane, swept away the viaduct connecting Dauphin Island to mainland Alabama. Recently, Hurricane Ivan caused nearly one-fourth of the island to be flooded by two feet of water, Hurricane Katrina damaged homes on the west side of the island with high winds and storm surge [31], and damage was also reported from Hurricane Nate in 2017. Hurricanes Ivan and Katrina destroyed over 300 homes on the island [30]. The island has decreased in size by an estimated 16% since 1958 due to coastal erosion [32]. Despite the island’s natural vulnerability and changing shorelines, construction has continued and residents tolerate the disruptions caused by these events [33].

Vehicle-mounted 360° video was collected covering nearly all roadways and all buildings on the island. Data collection required less than one day. The EEVC tool played videos at normal playback speed; therefore, the image extraction process took no longer than the total video time. Image extraction was unsupervised, so no investment of researcher time was necessary. The small time investment relative to conventional approaches verifies the efficiency of the methodology and promotes temporal data collection at regular intervals to monitor change. Figure 7(b) shows the video paths, denoted by green lines, collected for the island. Figure 7(c) shows building locations on the island. In total, almost 2,000 building locations were manually placed in the EEWV. Figure 7(d) shows survey locations created to store building images extracted from the 360° videos. Data were not collected in gated communities in the southeast of the island or in government-owned compounds on the east edge of the island. Over 2,150 images were collected at building locations in the EEWV, with multiple images extracted for buildings which were documented in multiple videos. Images were extracted for 1,752 building locations on Dauphin Island.

The extracted photographs were used to train a deep learning image classification model. The model included three class labels: elevated building, nonelevated building, and unknown class. The model is intended to locate buildings vulnerable to storm surge, which can be catastrophic for nonelevated buildings. The unknown class label accounts for images where no building is visible; without this class, images with no visible buildings would be classified as elevated or nonelevated, leading to decreased model performance. The existence of images with no building present is due to the maximum distance boundary specified for the videos. Buildings on the west side of the island are typically located farther from the roadway than buildings on the east side of the island. The maximum distance provided to the EEVC tool was large enough to capture buildings on the west side, but in cases on the east side of the island images were extracted from roadways where vegetation occluded visibility. Additional obstructions, such as vehicles in dual carriageways, sometimes occluded building visibility. Figure 8 displays images representing each class label.

The initial deep learning model was trained using 50 images per class. A classification test was run on a set of 120 images. The test set was created by collecting images representative of the image variation encountered on the island, with an equal representation between classes. Images collected from the west side of the island typically show buildings constructed on elevated piers with open soft story and sandy soil with little to no vegetation, such as the elevated class image in Figure 8(a). Images collected from the east side of the island show buildings where construction typically varies between elevated piers with open soft story, elevated piers with enclosed soft story, and either slab-on-grade or crawlspace foundations, with grass and trees visible in the images, as shown in the nonelevated class image in Figure 8(b). Images in the unknown class were typically collected from the east side of the island where vegetation occluded building visibility, as shown in the unknown class image in Figure 8(c). Images containing multiple buildings were collected in areas with high building density. In these images, buildings centered in the image seemed to govern predictions, but lower likelihood values were noticed. Images that were incorrectly predicted in the classification test were inspected and an explanation of model inaccuracy is detailed below.

There exists a “semantic gap” between image pixels used in machine learning and semantic concepts perceived by humans [34] that causes inaccuracy in deep learning image classification. Inaccuracy can be difficult to locate and understand because the Inception module contains many layers, and the output of each layer is withheld, with only a class prediction and likelihood provided for each image. Understandably, more complex images require more complexity in the image classification model. To illustrate potential sources of error in the model, Figure 9 presents a representative set of incorrectly predicted images. Figure 9(a) shows a nonelevated building which was predicted to be elevated. The columns supporting the walkway have geometry similar to the piers used in elevated buildings, which would likely signify an elevated building. Figure 9(b) shows an elevated building which is off center in the image. The center of the image contains a high concentration of trees, commonly seen in unknown images, likely leading to the unknown prediction. Figure 9(c) shows an elevated building classified as a nonelevated building. There are no discernible piers, and the sides of the building are not visible. Therefore, building geometry is evident, but no distinguishing geometry would signify elevation to the classification algorithm, leading to a prediction of nonelevated building.

The natural variation in building design and construction material is expected to require many training images to produce an accurate deep learning model. To increase model accuracy, an iterative approach to retraining was used: images incorrectly predicted by the model trained on 50 images per class were manually classified and used to retrain the model. Two retrained models were created, using 75 and 100 images per class. Figure 10 presents the precision-recall curves for the three models. A decreasing trend in model performance is seen as the number of training images increases. As training sets grow, image variation is introduced within each class, which may disrupt model classification rather than improve performance. At a certain point, the model should be able to isolate the image signal in the presence of the increased variation, leading to improved performance.
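Precision-recall curves such as those in Figure 10 can be produced from the per-image likelihoods by sweeping a classification threshold. The sketch below is a minimal pure-Python version, treating one class (e.g., elevated) as positive; the scores and labels are hypothetical illustrations, not the study's data.

```python
def precision_recall_points(scores, labels, thresholds):
    """Compute (threshold, precision, recall) triples by sweeping a
    likelihood threshold, treating label True as the positive class."""
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        points.append((t, precision, recall))
    return points

# Hypothetical model likelihoods for six images; True = elevated building
scores = [0.95, 0.90, 0.70, 0.60, 0.40, 0.20]
labels = [True, True, False, True, False, False]
curve = precision_recall_points(scores, labels, [0.5, 0.8])
# Raising the threshold trades recall for precision along the curve
```

Plotting recall against precision for a dense set of thresholds yields one curve per trained model, allowing the 50-, 75-, and 100-image models to be compared directly.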

Deep learning approaches typically require a large number of training images to reach a desired accuracy. The ImageNet project, designed to promote and enhance deep learning image classification by creating a large-scale image ontology, defines a set of 1,000 labels with an average of 500–1,000 images per class [35]. The comparatively small set of images available for model training in this case study is therefore insufficient to develop a high-performing deep learning model. However, the ability to capture large image datasets describing community infrastructure through an efficient data collection methodology, and to store them in a public portal, provides the fundamental image data that researchers need to arrive at such high-performing models. Additionally, deep learning models have recently shown increased performance in image classification; for example, the annual ImageNet Large Scale Visual Recognition Challenge produced significant reductions in image classification error from 2010 to 2015 [36]. Increased performance in emerging deep learning model architectures could facilitate the use of these models in vulnerability analyses.

Spatial analysis can be conducted with the information created in the data collection and deep learning approaches to support vulnerability analysis. Figure 11 presents the spatial results of the manual and deep learning approaches to building classification on Dauphin Island. The manual classification presented in Figure 11(a) shows that the west side of the island contains only elevated buildings, while the east side contains a mixture of elevated and nonelevated buildings. The deep learning predictions from the model trained on 100 images per class are presented in Figure 11(b). In general, the model predicted mostly elevated buildings on the west side of the island and a mixture of elevated and nonelevated buildings on the east side. Buildings predicted to be in the unknown class were excluded, as they are not useful in a vulnerability analysis; these buildings would require manual inspection to determine the building type, but the inspection process could be planned efficiently using the spatial results of the model. Elevated buildings incorrectly predicted to be nonelevated are problematic in a vulnerability analysis because they incorrectly signify higher vulnerability. The incorrect prediction is more problematic when nonelevated buildings are predicted to be elevated, because the comparatively higher vulnerability of these buildings would not be recognized. Figure 11(c) shows the locations of the incorrect predictions: red locations indicate nonelevated buildings predicted as elevated, and yellow locations indicate elevated buildings predicted as nonelevated. The images at these locations can be labelled and used in the next iteration of deep learning model training.
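Because every prediction is geolocated, the two error types mapped in Figure 11(c) can be separated with a simple pass over the records. The sketch below illustrates one way to do this; the coordinates and the `flag_mispredictions` helper are hypothetical, not part of the described toolchain.

```python
def flag_mispredictions(records):
    """Split geolocated (lat, lon, manual class, predicted class) records
    into the two error types of interest: nonelevated predicted elevated
    (masks true vulnerability) and elevated predicted nonelevated
    (overstates vulnerability)."""
    red, yellow = [], []
    for lat, lon, true_cls, pred_cls in records:
        if true_cls == "nonelevated" and pred_cls == "elevated":
            red.append((lat, lon))
        elif true_cls == "elevated" and pred_cls == "nonelevated":
            yellow.append((lat, lon))
    return red, yellow

# Hypothetical building records (coordinates are illustrative only)
records = [
    (30.255, -88.110, "elevated", "elevated"),
    (30.251, -88.095, "nonelevated", "elevated"),
    (30.249, -88.130, "elevated", "nonelevated"),
]
red, yellow = flag_mispredictions(records)
# red and yellow point sets can be mapped and their images relabelled
# for the next training iteration
```

The flagged point sets directly support the iterative retraining described above: images at these locations are relabelled and added to the training set.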

Most elevated buildings on Dauphin Island are single-family residential structures. While a large percentage of the nonelevated buildings are also single-family residential, many are commercial, multifamily residential, government, and other building types. Disaster impacts to the nonresidential building types have the potential to trigger cascading effects in the community. For example, damage to commercial buildings causing business disruptions can impact employment for residents of the island, and damage to schools can cause educational disruption and social problems for affected children. These represent a few of the community interdependencies that can be adversely affected by flood damage to nonelevated buildings.

3. Conclusions

Understanding the vulnerability of communities and measuring its changes through time are important for community leaders, governmental decision-makers, industry, and community stakeholders when planning mitigation activities. A methodology to rapidly collect large, passive datasets in a spatial and temporal format was presented. The Extreme Events Web Viewer was created to store the collected data in a spatial, temporal, and publicly available format and to add value to the data through analysis techniques. The Extreme Events Video Capture tool was created to facilitate extraction of image data from passively collected, vehicle-mounted 360° video. A deep learning application employing the Google TensorFlow architecture with the inception image classification model was created to extract information from the image data. A case study showcased how these tools can be combined to assess the vulnerability of buildings on Dauphin Island, Alabama. Reconnaissance was conducted by driving accessible streets on the island to document buildings. The collected data were geolocated and uploaded to the Extreme Events Web Viewer, and building images were extracted from the videos. A set of deep learning models was trained to classify building images as elevated, nonelevated, or unknown. Buildings on the island were manually classified, and a geospatial analysis of the deep learning model results was presented. An approach to quickly determine where incorrect classifications occurred showed how the geospatial nature of the methodology facilitates an iterative approach to deep learning model creation. The case study results demonstrate that the data collection, storage, and extraction approaches support deep learning model creation. An iterative approach to model training is required to increase accuracy, and the model should be trained with images collected at varying daylight and seasonal stages and across communities.

The vulnerability assessment for Dauphin Island was restricted to a single vulnerability indicator for a single built-environment system. The passive data collected in the methodology allow many systems of the built environment to be captured, including transportation and distribution networks and erosion control measures, among others. Deep learning image classification models created to assess vulnerability in these systems, and to track temporal changes due to vulnerability mitigation or disaster impacts, could lead to increased community resilience. Storing large, passively collected datasets in the Extreme Events Web Viewer and extracting information from them with automated approaches will provide information on these distinct, interdependent, disaster-vulnerable systems where and when it is needed, in a format available to the broader research community as well as to decision-makers and community stakeholders, helping to meet the NSB recommendations and the NSTC grand challenges.

Data Availability

Data used in this study can be accessed at

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research has been jointly funded by the Center for Sustainable Infrastructure and the Alabama Center for Insurance Information and Research at the University of Alabama. The authors are thankful for the support from these research centers as well as the support from the Center for Advanced Public Safety at the University of Alabama.

References
  1. NSB (National Science Board), “Long-lived digital data collections: enabling research and education in the 21st century,” accessed June 2018.
  2. NSTC (National Science and Technology Council), “Grand challenges for disaster reduction,” accessed June 2018.
  3. G. Brando, D. Rapone, E. Spacone et al., “Damage reconnaissance of unreinforced masonry bearing wall buildings after the 2015 Gorkha, Nepal, earthquake,” Earthquake Spectra, vol. 33, no. S1, pp. S243–S273, 2017.
  4. G. Brando, G. De Matteis, and E. Spacone, “Predictive model for the seismic vulnerability assessment of small historic centres: application to the inner Abruzzi Region in Italy,” Engineering Structures, vol. 153, pp. 81–96, 2017.
  5. H. Yu, M. A. Mohammed, M. E. Mohammadi et al., “Structural identification of an 18-story RC building in Nepal using post-earthquake ambient vibration and lidar data,” Frontiers in Built Environment, vol. 3, p. 11, 2017.
  6. D. Zekkos, M. Clark, M. Whitworth et al., “Observations of landslides caused by the April 2015 Gorkha, Nepal, earthquake based on land, UAV, and satellite reconnaissance,” Earthquake Spectra, vol. 33, no. S1, pp. S95–S114, 2017.
  7. A. J. Graettinger, D. Grau, J. Van De Lindt, and D. O. Prevatt, “GIS for the geo-referenced analysis and rapid dissemination of forensic evidence collected in the aftermath of the Tuscaloosa tornado,” in Proceedings of Construction Research Congress 2012: Construction Challenges in a Flat World, pp. 2170–2179, West Lafayette, IN, USA, May 2012.
  8. A. G. Kashani, P. S. Crawford, S. K. Biswas, A. J. Graettinger, and D. Grau, “Automated tornado damage assessment and wind speed estimation based on terrestrial laser scanning,” Journal of Computing in Civil Engineering, vol. 29, no. 3, Article ID 04014051, 2014.
  9. E. D. Kuligowski, F. T. Lombardo, L. T. Phan, M. L. Levitan, and D. P. Jorgensen, Final Report, National Institute of Standards and Technology (NIST) Technical Investigation of the May 22, 2011, Tornado in Joplin, Missouri, National Institute of Standards and Technology, Gaithersburg, MD, USA, 2014, National Construction Safety Team Act Reports (NIST NCSTAR-3).
  10. D. O. Prevatt, J. W. van de Lindt, A. Graettinger et al., Damage Study and Future Direction for Structural Design following the Tuscaloosa Tornado of 2011, University of Alabama, Tuscaloosa, AL, USA, 2011.
  11. D. O. Prevatt, J. W. van de Lindt, E. W. Back et al., “Making the case for improved structural design: tornado outbreaks of 2011,” Leadership and Management in Engineering, vol. 12, no. 4, pp. 254–270, 2012.
  12. B. J. Adams, J. A. Womble, M. Z. Mio, J. B. Turner, K. C. Mehta, and S. Ghosh, Collection of Satellite-Referenced Building Damage Information in the Aftermath of Hurricane Charley, Natural Hazards Center, Boulder, CO, USA, 2004.
  13. T. Comes and B. Van de Walle, “Measuring disaster resilience: the impact of Hurricane Sandy on critical infrastructure systems,” ISCRAM, vol. 11, pp. 195–204, 2014.
  14. K. R. Gurley, D. B. Roueche, G. Wong-Parodi et al., Survey and Investigation of Buildings Damaged by Category III Hurricanes in FY 2016-17–Hurricane Matthew 2016, Florida Department of Business and Professional Regulation, Tallahassee, FL, USA, 2017.
  15. F. Lombardo, D. B. Roueche, R. J. Krupar, D. J. Smith, and M. G. Soto, “Observations of building performance under combined wind and surge loading from Hurricane Harvey,” in Proceedings of AGU Fall Meeting Abstracts, New Orleans, LA, USA, December 2017.
  16. A. De Mauro, M. Greco, and M. Grimaldi, “A formal definition of big data based on its essential features,” Library Review, vol. 65, no. 3, pp. 122–135, 2016.
  17. FEMA (Federal Emergency Management Agency), “Purchasing flood-prone property,” accessed June 2018.
  18. E. O. Michel-Kerjan, “Catastrophe economics: the national flood insurance program,” Journal of Economic Perspectives, vol. 24, no. 4, pp. 165–186, 2010.
  19. Multihazard Mitigation Council, Natural Hazard Mitigation Saves: An Independent Study to Assess the Future Savings from Mitigation Activities, vol. 68, National Institute of Building Sciences, Washington, DC, USA, 2005.
  20. S. L. Cutter, L. Barnes, M. Berry et al., “A place-based model for understanding community resilience to natural disasters,” Global Environmental Change, vol. 18, no. 4, pp. 598–606, 2008.
  21. C. A. F. Ezequiel, M. Cua, N. C. Libatique et al., “UAV aerial imaging applications for post-disaster assessment, environmental management and infrastructure development,” in Proceedings of the 2014 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 274–283, IEEE, Orlando, FL, USA, May 2014.
  22. J. F. Zumberge, M. B. Heflin, D. C. Jefferson, M. M. Watkins, and F. H. Webb, “Precise point positioning for the efficient and robust analysis of GPS data from large networks,” Journal of Geophysical Research: Solid Earth, vol. 102, no. B3, pp. 5005–5017, 1997.
  23. P. August, J. Michaud, C. Labash, and C. Smith, “GPS for environmental applications: accuracy and precision of locational data,” Photogrammetric Engineering and Remote Sensing, vol. 60, no. 1, pp. 41–45, 1994.
  24. OpenStreetMap, “OpenStreetMap,” accessed June 2018.
  25. NHERI (Natural Hazards Engineering Research Infrastructure), “Five-year science plan: multi-hazard research to make a more resilient world,” accessed July 2017.
  26. D. Cireşan, U. Meier, and J. Schmidhuber, “Multi-column deep neural networks for image classification,” 2012, arXiv preprint arXiv:1202.2745.
  27. M. Abadi, P. Barham, J. Chen et al., “TensorFlow: a system for large-scale machine learning,” in Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 2016), vol. 16, pp. 265–283, Savannah, GA, USA, November 2016.
  28. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826, Las Vegas, NV, USA, June 2016.
  29. C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in Proceedings of AAAI, vol. 4, p. 12, San Francisco, CA, USA, February 2017.
  30. C. M. Janasie and S. Deal, “Increasing climate resilience on Dauphin Island through land use planning,” accessed June 2018.
  31. NWS (National Weather Service), “Hurricane Katrina photos, Dauphin Island and Mon Luis Island (southern Mobile County),” accessed June 2018.
  32. R. A. Morton, “Historical changes in the Mississippi-Alabama barrier-island chain and the roles of extreme storms, sea level, and human activities,” Journal of Coastal Research, vol. 246, pp. 1587–1600, 2008.
  33. NPR (National Public Radio), “Alabama’s tiny Dauphin Island cleaning up after Hurricane Nate’s wallop,” accessed June 2018.
  34. J. Wan, D. Wang, S. C. H. Hoi et al., “Deep learning for content-based image retrieval: a comprehensive study,” in Proceedings of the 22nd ACM International Conference on Multimedia, pp. 157–166, ACM, Orlando, FL, USA, 2014.
  35. J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei, “ImageNet: a large-scale hierarchical image database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), pp. 248–255, IEEE, Miami, FL, USA, June 2009.
  36. O. Russakovsky, J. Deng, H. Su et al., “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.

Copyright © 2018 P. Shane Crawford et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
