
Review Article | Open Access


Vanita Jain, Dharmender Saini, Monu Gupta, Neeraj Joshi, Anubhav Mishra, Vishakha Bansal, D. Jude Hemanth, "A Comprehensive Review on Design of Autonomous Robotic Boat for Rescue Applications", Mathematical Problems in Engineering, vol. 2021, Article ID 6614002, 17 pages, 2021. https://doi.org/10.1155/2021/6614002

A Comprehensive Review on Design of Autonomous Robotic Boat for Rescue Applications

Academic Editor: Ricardo Aguilar-Lopez
Received: 15 Dec 2020
Revised: 31 May 2021
Accepted: 08 Jun 2021
Published: 22 Jun 2021

Abstract

New technologies are advancing and emerging day by day to improve human safety through various autonomous systems. The continued use of autonomous vehicles and systems in search and rescue (SAR) operations is a challenging research area, particularly for marine-based activities. This paper compiles a comprehensive systematic literature review (SLR) providing an overview of the improvements that have been made in the field of autonomous technologies for search and rescue operations over the last five years. A methodology for using autonomous vehicles in water for SAR operations has been incorporated and demonstrated. The focus of this study is to examine the various techniques and address the different challenges faced in keeping human beings safe during rescue operations. The comparison of results achieved by various technologies and algorithms is highlighted in this paper. This literature survey is intended to serve as a good source of information for fellow researchers to analyze the study results precisely.

1. Introduction

The response to any mishap should be made in such a way that the victims are reached as soon as possible while taking care to avoid additional collapse and damage. The challenging part is that the victims and the rescuers should both be safe; the foremost and primary goal is to save lives. In this world of emerging technologies and automation, robots can prove to be a useful asset to meet this goal, either by interacting directly or indirectly with victims or by supporting protection equipment. The main task of the rescue team is to search for human survivors at the incident site, a hazardous task that often leads to the loss of lives. However, the introduction of autonomous robotic rescue devices [1, 2] can prove valuable for humans and save their lives. Therefore, this paper focuses on studying and reviewing various unmanned SAR technologies, mainly for water bodies, for detecting, locating, and rescuing humans. Much research is ongoing into developing SAR technologies.

For centuries, one of people's greatest fears has been falling into a body of water. Presently, several different technologies address this problem. Although there is plenty of protection equipment, such as trailing lines, man-overboard detection transmitters, chase boats, rescue buoys, and life rings, a significant drawback of these devices is that they work only in specific situations. Due to human limitations and rescue equipment drawbacks, a new device or product is needed to save the lives of people who work in accident-prone marine areas. Because of large ships' movement and ongoing trade, supervising and protecting staff on board is difficult.

In this paper, the newest advancements and techniques in the field of autonomous technologies for search and rescue operation that have proved fruitful are reviewed and incorporated. The work in this paper would help other fellow researchers to find more innovative and precise methodologies to achieve desirable results.

This paper is organized as follows. Section 2 comprises various methodologies that have been used to do the systematic literature review, and Section 3 presents previous work that has been done in this field. Section 4 provides an overview of the system design for the autonomous robotic rescue boat and the hardware associated with it. Section 5 demonstrates the discussion. Contributions and novelty of the paper are described in Section 6. Section 7 deals with the applications in brief. Finally, Section 8 concludes the literature review by analyzing the different research papers.

2. SLR Methodology

In today's world, where data is viewed as the "new oil," the SLR has proven to be of great help to researchers all over the world. Sometimes, it can become a little overwhelming to consume all the data currently existing for a specific domain. There is an enormous amount of data and literature in the field of autonomous robotic rescue vehicles, so it becomes difficult to summarize and stay updated with the newest methodologies and technologies because very few SLRs have been conducted in this field until now. With this domain gaining popularity, it is necessary to conduct a literature review, so the authors of this paper have analyzed the research in this domain for the past five years using the six most popular digital libraries. Search questions were formulated, and then the results were combined with the critical challenges faced during the review process.

2.1. Search Source

Digital libraries these days are the most used and reliable source for books, journals, and various articles from scholars and researchers worldwide. In this literature review, six renowned digital libraries have been considered. Following are the digital libraries that have been used to extract data:
(1) Science Direct
(2) Springer
(3) Research Gate
(4) IEEE
(5) Web of Science (WoS)
(6) arXiv

The review in this paper has been restricted to the studies done during the last five years, i.e., from 2016 to 2020.

2.2. Search Questions

It is essential to precisely understand what questions would be answered in this paper after doing the complete literature review. The questions have been formulated so that the answers obtained for each question are accurate and precise and contain no unnecessary information. The following questions have been answered:
(1) Techniques used in autonomous vehicles for communication between the command center and the victim
(2) Challenges in autonomous rescue boats
(3) Navigation of the rescue boat to the desired location

2.3. Search Query

To become familiar with autonomous robotic rescue boats, a quick review of some articles falling under this topic was done. Some popular models and methodologies under this topic that are correct, relevant, and helpful to this research have been considered. The different search query questions and the number of research papers found in each of the six digital libraries are shown in Figures 1–4, along with the relevant papers. Overlapping research papers already found in a previously explored library have not been added to the total number of relevant and useful research papers for the other libraries.

3. Previous Studies

In paper [1], the authors studied the robust manoeuvring ability of an autonomous surface vehicle. They used an optimized sparse optical flow for object detection and faster computation, which is robust and reliable. Due to the small size of the vehicle, it is mostly safe, and thus collision issues can be avoided. In their paper, the detection uses not only the color information of objects but also their shape and movement. Controlling the entire autonomous navigation pipeline is cumbersome and challenging when good manoeuvring capability is required. The authors proposed an improved algorithm with adaptive estimation capability to obtain reliable navigational information.
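By way of illustration, the frame-to-frame motion that optical-flow-style methods recover can be sketched with a simple FFT phase-correlation estimator. This is an illustrative stand-in for such motion estimation, not the optimized sparse optical flow of [1]:

```python
import numpy as np

def estimate_shift(prev_frame, next_frame):
    """Estimate the global (dy, dx) translation between two grayscale
    frames via FFT phase correlation."""
    f_prev = np.fft.fft2(prev_frame)
    f_next = np.fft.fft2(next_frame)
    cross = f_next * np.conj(f_prev)
    cross /= np.abs(cross) + 1e-12   # keep phase information only
    corr = np.fft.ifft2(cross).real  # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Wrap shifts larger than half the frame back to negative values.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

A sparse method tracks a handful of feature points instead of the whole frame, but the underlying idea of recovering inter-frame displacement is the same.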

The authors of [2] discussed the impact of waves in the water body on the track of an autonomous robotic rescue vehicle, which leads to low manoeuvrability. They presented an autonomous unmanned aerial vehicle (UAV) which utilizes global navigation satellite system techniques along with computer vision algorithms. The authors in [2] addressed the problem of emergency services in times of crisis. High-intensity waves occur when the robotic rescue boat moves at very high speed from source to endpoint, which negatively affects the quality of the images taken by the camera, so the authors proposed a system to nullify this effect.

The noise generated by high camera vibration of the autonomous vehicle, caused by waves and poor lighting, leads to failed obstacle detection, low camera quality, and slow processing, resulting in blurry images and deteriorating autonomous robotic rescue vehicle navigation. To overcome these problems, the authors of [3, 4] designed an autonomous system with information and communication technologies (ICT), capable of withstanding large waves and increasing the autonomous system's stability. The autonomous vehicle is developed with a competent processor for sensing in real time. With the increasing demand for higher data rates, data integrity, real-time communication, and robustness, the effect of ICT is reflected in their paper. Moreover, to overcome the noise effect, the authors used an optimized optical-flow-based algorithm to detect objects such as the edges of the track along which the robot has to move. More specifically, the region of interest is set at the bottom of the image to avoid undesired objects, such as sky, trees, and people. The defined region of interest mainly affects image processing speed because only a specific region of the image is processed.
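The bottom-of-image region of interest described above can be sketched in a few lines; the 40% fraction is an assumed tuning parameter, not a value taken from [3, 4]:

```python
import numpy as np

def bottom_roi(frame, fraction=0.4):
    """Keep only the bottom `fraction` of the frame so that sky, trees,
    and people near the horizon are discarded before any further
    processing, shrinking the array the detector has to scan."""
    h = frame.shape[0]
    return frame[int(round(h * (1 - fraction))):, ...]
```

Because subsequent processing only ever sees the cropped rows, the per-frame cost drops roughly in proportion to the discarded area.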

The algorithms discussed in [5–9] do not rely solely on color information but also use the movement and shape of the object; because of this, they provide object detection that is more robust to noise, and thus to lighting, blur, and contrast. In [10], the authors used a robot with dimensions of 120 × 42 × 17 cm, made of fiberglass, named PENShip (Politeknik Elektronika Negeri Surabaya Ship); the main goal of this design is to complete the task assigned to it. The PENShip hull adopts a catamaran form with pontoon-type hulls. The back of the frame is equipped with one motor and a rudder (actuator), which facilitates the boat's manoeuvring. PENShip is equipped with only a camera sensor, which is lighter than other sensors. In [11], the authors described SAR missions and the involved methodologies and operations.

The scenario during a disaster is often unpredictable, unstructured, and time varying, which means that there are many challenges for the successful implementation of unmanned vehicles [12–14]. For executing these operations, many robots of varying sizes and functionalities are required, and this can be resolved by incorporating a diverse team of robots, collaborating dynamically as an interoperable team. This includes a comprehensive analysis of related authentication programs in multirobot systems and their implementation in real time [15, 16].

These days, a wide variety of autonomous vehicles are used for different operations in many different fields, and this number is going to increase in the subsequent years. Many large-scale systems aim to solve these problems, but they are quite costly and not very useful. Research has been done focusing on the interaction between these systems and implementing various operations [17, 18]. The authors of [19, 20] have proposed different automation classes and offer them as a starting point in the field of search and rescue. Even integrating a single robot takes considerable effort, and each robot is developed by a different provider using its own design, framework, and middleware [21–23]; the system issues relate to increasing complexity and to the heterogeneity of robot capabilities and their coordination. Robotic systems require a ground control station because each has its own commands and specific protocols, despite the several standards that have been proposed. These protocols make integration between systems difficult. The papers [24–26] use active modeling for heterogeneous robotic systems to provide higher accuracy. The lack of proper protocols puts additional pressure on the operation and maintenance of these multivehicle systems. The authors mainly emphasized achieving robot interoperability, enabling robots to work at full capacity to perform the assigned operations [27–29]. The task described is to develop a diverse fleet with a unified and seamlessly integrated group of autonomous air, land, and marine vehicles. Substantial efforts have been made to evaluate existing work on the standardization of robot systems. Given the specific requirements, priority is given to organizations which consider various fields (air, land, and sea) [30]. Similarly, when considering the platforms used, priority is given to smaller and lighter platforms in terms of standards and methods.
Several measures have been taken to address both issues. However, integrating them is difficult and not yet feasible. A proper standard for integration that is compatible with both large and small systems is still required [31, 32]. Moreover, large robot systems hinder tight collaboration because they require high computational time. The main focus is on the selection of the appropriate form of robot system based on the evaluation of the multirobot search, its application to additional measures, recommendations for improvements, and compatibility and performance across all the robots, which requires an optimized algorithm to perform computation in the least amount of time; the algorithms/methods described in [33–35] are optimized to this end. The applications of these autonomous vehicles are increasing day by day, and they are becoming an indispensable tool in a wide range of fields and in almost every aspect of human life, such as SAR, GPS, automated cars, identifying pollutants, and radar sensing [36, 37]. However, the authors have also mentioned the challenges faced, such as environmental conditions, which affect the overall communication and lengthen sensor response. The goal is to identify a source object of interest (e.g., a chemical composition in the ocean) using the appropriate autonomous vehicles. Therefore, in this paper, the aim is to examine some of the latest advancements that have been made in search and rescue technologies using autonomous vehicles and algorithms like the hex-path algorithm and the planarian algorithm for path optimization. The paper [38] considered oil spills, heavy rain, and collisions as the most disastrous events, causing a critical impact on marine life. The authors have considered an intelligent system consisting of a model base, environmental disaster modeling, decision support, and intelligence techniques like heuristic search algorithms and machine learning to strengthen the operating mechanism and eliminate the potential impacts.
Simultaneously, the authors of [39] have developed a method of using laser remote sensing for oil spill detection. The samples obtained in this method are analyzed in the laboratory using parallel factors analysis (PARAFAC) and time-resolved fluorescence.

The authors of [40] used a hybrid approach for robot path planning. The algorithms used are the artificial potential field (APF) and an enhanced genetic algorithm (EGA). The APF algorithm is used to determine all feasible paths between the start and destination locations, while the EGA is developed to enhance the original paths in continuous space and obtain the optimal path between the start and destination locations. Autonomous navigation of a robot is a promising study area due to its wide applications. The robot's navigation [41] consists of four fundamental components known as perception, localization, cognition and path planning, and motion control, among which path planning is the most significant part. The authors have surveyed different path planning techniques, including classical methods and heuristic methods. They have also discussed heuristic-based algorithms which use neural networks and hybrid algorithms, while the paper [42] concentrated on presenting a comparison between conventional motion planning approaches such as the potential field (PF) and Dijkstra's algorithm (DA) and metaheuristic-based methods such as the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), and cuckoo search (CSA). The authors of the papers [26, 43] have conducted research on unmanned surface vehicles (USVs) considering both natural and human-made disasters. The work describes deployments in disaster scenarios by considering the technical aspects of USV hardware and software. The paper [44] presented the features of an unmanned surface vehicle. The authors used the label-A algorithm to improve the manoeuvring characteristics and enhance motion planning for the vehicle. The algorithm for motion planning comprises the following stages: first, the vehicle trajectory is established using its manoeuvring characteristics; second, an advanced label-A algorithm is formed.
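A minimal 2D sketch of the APF idea is shown below; the gains, step size, and influence radius are illustrative assumptions, not values from [40], and the EGA refinement stage is omitted:

```python
import numpy as np

def apf_path(start, goal, obstacles, step=0.1, max_iters=500,
             k_att=1.0, k_rep=0.5, influence=1.0):
    """Artificial potential field path sketch: the robot descends a
    field combining an attractive pull toward the goal with repulsive
    pushes from obstacles inside the `influence` radius."""
    pos = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    path = [pos.copy()]
    for _ in range(max_iters):
        force = k_att * (goal - pos)  # attractive component
        for obs in obstacles:
            diff = pos - np.asarray(obs, dtype=float)
            d = np.linalg.norm(diff)
            if 1e-9 < d < influence:
                # Repulsion grows sharply as the robot nears the obstacle.
                force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**2
        pos = pos + step * force / (np.linalg.norm(force) + 1e-9)
        path.append(pos.copy())
        if np.linalg.norm(goal - pos) < step:
            break
    return path
```

Plain APF like this can stall in local minima, which is exactly the weakness the genetic refinement stage in the hybrid approach is meant to mitigate.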
The authors in [45] used cooperative multirobot systems (CMRS) by establishing a wireless connection with the robot for its operation. Further, the mobility of robots can be supported using their sensing functionality. The authors in [46, 47] used a PIR-sensor-based autonomous rescue robot which can identify a human being at an unreachable point of the accident area. They used a joystick, RF technology, and an ultrasonic sensor for proper navigation of the robot. An IP camera is also integrated for examining and analyzing conditions that will facilitate human detection. The papers [48, 49] presented the algorithmic aspects of multirobot coordination and perception by studying heterogeneous SAR robots in various environments. They also discussed coordination and interoperability in heterogeneous multirobot systems and various multirobot SAR systems by considering machine learning. The authors in papers [50–52] demonstrated the autonomous navigation aspects of avoiding collisions with other objects: the vehicle should be able to detect obstacles and perform suitable manoeuvres automatically. The method used for target detection involves laser rangefinders. The main limitation of this approach is poor environmental conditions with bad visibility. Therefore, the authors presented an approach based on automotive three-dimensional radar. The study aims to assess object detection possibilities based on a comparison with the images obtained. Several earlier efforts in this area have focused on developing and applying robots for environment monitoring. The authors of [53–55] presented an approach to operating USVs in a changing marine environment, including obstacles and sea surface currents. The work described a search technique, the Dijkstra algorithm, to determine the motion planning for a USV moving in a maritime environment considering static and moving obstacles.
The approach's performance is tested in simulations in terms of path length and elapsed computational time. This approach showed the effectiveness of the global path. The paper [56] aimed to develop optimal motion planning for an autonomous surface vehicle comprising a sensor so as to maximize the sensor-related information. The vehicle uses a nonlinear distribution model of the pollutant source to determine its level. The degree of particle detection depends on the distance between the vehicle and the source. The authors of [57] used a probabilistic map of the source position, developed from the sensor information, for dynamic motion planning. They used an online nonlinear Monte Carlo algorithm for obtaining sensor information about pollutants at different locations. The authors described an approach for plume tracing using autonomous surface and underwater vehicles. The vehicles are equipped with chemical sensors and acoustic modems and follow the flow to collect samples. A detailed analysis of odor detection techniques and data sampling methods for autonomous vehicles is shown in the papers [58–60]. They presented a surface-enhanced Raman scattering (SERS) odor compass, a lattice design of SERS sensors, and machine learning techniques to identify multiple odor sources. They achieved the best accuracy for multiple odor sources using a convolutional neural network and a support vector machine. Sensors are essential for source localization, and the means of sensing have a comprehensive scope [61–63]. The authors of [64, 65] used multirobot systems to significantly increase the efficiency of SAR robots through a faster search for victims, providing real-time monitoring and surveillance of SAR operations. SAR operations include a variety of conditions and situations, and collaborative multirobot systems can provide the most benefits.
The paper [66] shows the advances on the multirobot SAR support from an algorithmic view and on the methods enabling collaboration among the robots.
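The grid-based Dijkstra search that underlies much of the USV motion planning surveyed above can be sketched as follows. This is an illustrative simplification (4-connected occupancy grid, unit edge costs, static obstacles only), not the implementation of [53–55]:

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest collision-free path on an occupancy grid
    (0 = free, 1 = obstacle) using Dijkstra's algorithm."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in dist:
        return None  # goal unreachable
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Moving obstacles and sea-surface currents, as considered in the surveyed work, would require replanning or time-expanded costs on top of this basic search.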

The papers [67–69] discussed a powerful toolkit that has been used by search and rescue workers to handle calamities. However, the accuracy of the methods used in these papers proves insufficient in this fast-paced world. The authors provided an analysis of various tools used in search and rescue operations. They also discussed a project which uses many tool sets to perform search and rescue in a short amount of time. The system consists of tools like assistive vehicle sensors. The devices are developed and integrated into the command-and-control equipment and supportive tools so that the system can be used effectively.

Recent studies [70–72] on calamities in specific locations of the world have shown the difficulties in managing these crises. Such destruction has a severe impact on lives and the economy that is quite difficult to manage. However, there are still many bottlenecks which prevent the successful implementation of these unmanned tools on practical terrain. In a place of crisis, searching for human survivors becomes the priority task of the rescue services. Search and rescue operations are hazardous tasks that can also lead to the loss of the workers themselves. An efficient system, either automatically or manually controlled, can be used in such events to carry out rescue operations and save human lives [73–75]. Many robots are being used mainly for rescue in such situations [76].

Search and rescue environments are not friendly, so tools that can be deployed quickly are valuable [77]. Robotic tools should be used not to eliminate workers' intervention, but first to ensure human safety. Various factors matter in search and rescue operations; the major one, which cannot be ignored, is time. The motive of all search and rescue teams is to perform their services as quickly as possible to eliminate further danger to human lives [78]. However, traditional rescue assets, such as rescue boats, sometimes become overloaded, which can cause another crisis. Therefore, by using robotic tools, the search operation can be significantly sped up. These tools have also reduced the time taken by workers to get an overview of the affected site, and thus workers can plan quickly without putting themselves in danger. As noted in [79], there are still some issues that need to be resolved to make the system perform multiple tasks. For crises involving biological or chemical agents, using a robotic system will be even more beneficial. There are circumstances where robotic systems might not work, such as heavy rain or wind, or operations that have to be halted at night. However, these robotic tools can be extended to withstand such conditions by modifying the system.

Past research shows that robotic tools have been improving widely for search and rescue operations, but some flaws restrict the full use of these systems without human intervention [80]. In paper [81], researchers noted that there is currently intense interest in deploying these unmanned vehicles or robots to handle challenging cases. An intelligent path algorithm is required to proceed efficiently [82, 83]. An algorithm to perform multiple-task allocation was developed using a self-organizing map, reducing collisions. This algorithm generates an array storing all the obstacles to find the least-obstacle path in a short amount of time [84]; it uses that array to determine the optimal trajectory to the destination. The algorithm is validated on different simulation tools, which emulate a realistic environment, to determine the algorithm's accuracy.

Recently, there has been growth in research on USVs. Growing numbers of USVs have been used in various operations, such as military ones. The limitation of [85] is that the USVs used lack sophisticated sensors. The risk to workers has also been reduced by adopting these technologies. One extensive use of USVs is in polluted lakes, where they collect data by autonomously visiting locations in the lake with the least distance traveled. Thus, the time taken by these operations has also been greatly reduced by adopting this system. The authors of the paper [86] discussed an optimizing algorithm with a low complexity order and also addressed the local minima problem. The different location points can be treated so as to find a collision-free path through multiple goals [52, 87]; the challenges faced include local object avoidance, and different algorithms are presented for efficient navigation. Therefore, using the task allocation algorithm, the optimized visiting order can be obtained, and using the path planning algorithm, the collision-free path can be calculated.
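A simple stand-in for the distance-minimizing visiting order described above is a greedy nearest-neighbour tour. This is illustrative only; it approximates rather than solves the shortest-tour problem and is not the algorithm of [86]:

```python
import math

def greedy_tour(depot, sites):
    """Order a set of sampling sites by repeatedly visiting the
    nearest unvisited site, returning the order and total distance."""
    order, pos, remaining = [], depot, list(sites)
    total = 0.0
    while remaining:
        nxt = min(remaining, key=lambda s: math.dist(pos, s))
        total += math.dist(pos, nxt)
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return order, total
```

In a full system, the tour produced by the allocation step would then be handed to the path planner, which replaces each straight leg with a collision-free route.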

Unmanned aerial vehicles (UAVs) [88], drones, optical sensors, and radio frequency (RF) modules can be used to obtain attitude information and to understand the status and purpose of a target. The problems the authors faced, such as the high cost of designing the estimation algorithm, were overcome by using a quadrotor 6D pose estimation algorithm based on keypoint detection, the perspective-n-point algorithm, and a graph network, which provides the highest performance in simulated and real conditions. Currently, object 6D pose estimation algorithms rely on a very accurate pose interpretation or a three-dimensional target model, which requires many human resources and is difficult to apply to uncooperative targets. A quadrotor 6D pose estimation algorithm can be used to overcome these issues. The relational graph improves the network's predictive power for the main components of the four motors. The accuracy and speed can be significantly improved compared with the most advanced keypoint detection algorithms.

Nowadays, unmanned aerial vehicles are being used to view the affected area, but there is a significant collision concern [89]. However, the limitation of these types of methods is that they are ineffective in overcrowded areas and do not generalize to different types of subjects; by correcting this, accuracy can be increased even more. Thus, to avoid collisions, precise sensors can help track the drone to its destination. In that study, the authors showed how effective object detection can be done with a CNN to improve collision avoidance. However, they did not take the time domain into account, which could have improved the performance even more; moreover, the testing should have been done on real-life videos instead of random images. Over the past years, much research has been conducted to detect drones [90]. The proposed model uses the IPM method, unlike typical vision-based algorithms, which depend on feature tracking. The IPM method is much more effective, especially in cases where the camera is close to the ground, which makes feature tracking a cumbersome process.

The authors of [91] proposed a solution based on a convolutional neural network (CNN) model to detect objects from drones. The method distinguishes shape and motion characteristics of small flying drones at a distance; in this way, objects can be tracked or skipped effectively. The challenges faced are primarily autonomous tracking and object detection. Using six randomly generated points in a simulated virtual environment, the PNP algorithm's accuracy can be verified. Although the collision detection and warning system is sound, it still has limitations, such as confusing backgrounds, for example, when the aircraft is against the ground or when part of the scene is below the horizon; all of this could result in an unreliable collision avoidance system because of the high false-positive rate. Researchers have proposed an off-board quadrotor pose estimation method that uses four LEDs on the quadrotor and infrared cameras to measure the quadrotor's posture. There are several ways to measure an object's pose: one is to learn the orientation directly, known as orientation learning; another is to first detect key points and then apply the PNP algorithm [92]. Orientation learning uses non-Euclidean space, which causes difficulty in obtaining images. Some earlier studies used local features from RGB images to calculate the attitude [93]. The problem with this specific research is that it has only been tested in confined airspace. The model has been trained on a synthetic dataset, which results in improper pose estimation and detection, but this can be addressed by using OpenCV on a real image dataset. Recently, in another study, researchers used machine learning and deep learning to assess the posture of objects [94].

4. System Design Overview Survey

This section shows an overview of an unmanned surface vehicle (USV) and the interaction of all subsystems with each other. Each subsystem comprises various devices that perform the required tasks for the proper functioning of the overall system. Figure 5 depicts the architecture of the system. The hardware and software modules of the system are described in this section.

4.1. Hardware Module

The hardware module mainly consists of the following components: Arduino, motor driver, motors, power supply, global positioning system (GPS) module, camera, Raspberry Pi, and RF module. These components are configured for the overall functioning of the system.

4.1.1. GPS

The GPS is used for defining the global position of an object. Different modules of GPS have been compared for better understanding. Table 1 classifies the efficient and commonly used GPS modules used in search and rescue operations.


Types of modules     Chipset used        Updating rate (Hz)

LS23060 [95]         MediaTek MT3318     5
EM-406A [96]         SiRF Star III       1
SUP500F [97]         Venus 634           10
Copernicus [98]      Trimble TrimCore    1
DS2523T [99]         u-blox 5            4
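Whichever module is chosen, position data typically arrives as NMEA 0183 ASCII sentences over a serial link. A minimal parser for the common GGA sentence is sketched below (no checksum validation, GGA only; illustrative, not tied to any specific module in Table 1):

```python
def parse_gpgga(sentence):
    """Extract latitude and longitude (decimal degrees) from an
    NMEA GGA sentence such as '$GPGGA,123519,4807.038,N,...'."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")

    def dm_to_deg(value, hemisphere):
        # NMEA packs degrees and minutes together: ddmm.mmmm
        dot = value.index(".")
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        deg = degrees + minutes / 60.0
        return -deg if hemisphere in ("S", "W") else deg

    lat = dm_to_deg(fields[2], fields[3])
    lon = dm_to_deg(fields[4], fields[5])
    return lat, lon
```

A production parser would also verify the trailing checksum and handle empty fields emitted before the module has a satellite fix.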

4.1.2. Raspberry Pi

There have been three generations of Raspberry Pi, and every generation has two models, namely, Model A and Model B. The characteristics of both models of Raspberry Pi along with the little deviation in the model are depicted in Figures 6 and 7.

4.1.3. Motor Drivers

The motor drivers used in the system for search and rescue are mainly four types, i.e., AC, DC, servo, and stepper. The choice of motor driver depends on its application and usage. Motor drivers can be controlled directly by connecting the power supply or by devices such as wireless systems and microcontrollers.

DC motor drivers are widely used in many applications due to their numerous advantages. Figure 8 shows the efficiency of the most popular DC motor drivers with respect to the supply voltage. The revolving speed of the DC motor drivers with respect to the supply voltage is shown in Figure 9.
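The speed-versus-voltage relationship in Figure 9 suggests a simple first-order control sketch: pick a PWM duty cycle from the desired speed and the supply voltage. Both the proportional model and the `rpm_per_volt` constant here are illustrative assumptions; real drivers add load- and efficiency-dependent corrections:

```python
def duty_for_speed(target_rpm, supply_voltage, rpm_per_volt):
    """Choose a PWM duty cycle for a brushed DC motor, assuming
    speed ~= rpm_per_volt * duty * supply_voltage (first-order
    no-load model only)."""
    required_voltage = target_rpm / rpm_per_volt
    duty = required_voltage / supply_voltage
    return max(0.0, min(1.0, duty))  # clamp to the valid PWM range
```

For example, a motor rated at 100 rpm per volt on a 12 V supply would need a duty cycle of 0.5 to target 600 rpm; requests beyond the supply's reach clamp to full duty.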

4.1.4. DC Motor

From automobiles to robotics, small- and medium-sized motoring applications often feature DC motors for their wide range of functionality. Because DC motors are deployed in such a wide variety of applications, there are different types of DC motors suited to different tasks across the industrial sector. The different types of DC motors are classified in Figure 10.

4.1.5. RF Module

The RF module is used for transmitting and receiving message signals and is an essential part of the search and rescue system. A drone can be programmed so that, whenever it detects the face of a victim who has fallen into the water and needs help, it sends information such as its own location and images of the victim to the receiver through the RF module. Figure 11 shows the different types of RF modules.
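To make the transmission concrete, a victim alert of the kind described above (a detection flag, GPS coordinates, and a timestamp) could be serialized into a fixed-size payload before being handed to the RF module. This is only a sketch under assumed field choices; the reviewed papers do not specify a message format.

```python
import struct

# Assumed layout: 1-byte detection flag, two 8-byte doubles for
# latitude/longitude, 4-byte unsigned Unix timestamp (big-endian).
MSG_FMT = ">BddI"

def pack_alert(lat, lon, timestamp, victim_detected=True):
    """Serialize a victim alert into a 21-byte payload for the RF link."""
    return struct.pack(MSG_FMT, int(victim_detected), lat, lon, timestamp)

def unpack_alert(payload):
    """Decode a payload produced by pack_alert back into its fields."""
    flag, lat, lon, ts = struct.unpack(MSG_FMT, payload)
    return bool(flag), lat, lon, ts
```

A compact binary layout like this keeps the packet well under the small frame sizes typical of low-cost RF transceivers.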

4.2. Software Module
4.2.1. Face Detection Techniques

There are many face detection techniques in the literature. The authors of this paper have studied the five most efficient and popular techniques for detecting the face of a victim; these are shown in Figure 12 and described briefly below.

(i) Haar-like feature-based face detection. The Haar cascade algorithm is a machine learning-based algorithm for detecting objects. The authors in ([104] and references therein) have proposed a model to improve its performance in two steps. First, new features, called separate features, are defined for the detector by adding a do-not-care area between the Haar features, so that a new feature for the cascade detector can be defined. Second, the detection rate is improved by a decision algorithm that selects the best width of the do-not-care area. In this algorithm, background images are ignored, and the remaining stages are not evaluated once a stage rejects an image. However, if a false detection occurs in one stage, it affects the other stages and causes an unwanted increase in the false rate, which is the main limitation.

(ii) Geometric-based face detection. Principal component analysis of face structure modelling is discussed in ([100] and references therein); it limits the search space and improves the face detection rate. However, the choice of feature is a significant parameter for improving accuracy and decision rate. The Canny edge detection algorithm and principal component analysis are used to visualize the structure of faces by filtering the image on its pixel values, providing excellent accuracy with limited time complexity. However, this method is less useful for complex geometrical structures, which could help identify critical facial features and generalized threshold values.

(iii) Improved LBP algorithm. Local binary pattern (LBP) is a face detection technique ([101] and references therein). Its accuracy can be improved by image processing such as histogram equalization, contrast improvement, image blending, and filtering to eliminate some of the problems. Furthermore, to improve accuracy, the authors use the following properties of the input and reference face images to obtain the best-quality images: sharpness, illumination, noisiness, resolution, scale, and pose. The filter that gives the best result is the bilateral filter. The preprocessed input image is divided into regions, and the pixel value of each face region is calculated. If an adjacent pixel is greater than or equal to the centre pixel, it is marked with binary 1; otherwise, it is marked with binary 0. This process is repeated for every pixel in the face region to obtain the binary pattern that builds the feature vector of the input face images. The limitation of this method is that it does not consider masked faces and occlusion.

(iv) Face detection based on the Viola–Jones algorithm with composite features. The Viola–Jones algorithm detects objects in images quickly and precisely and works exceptionally well on human faces ([102] and references therein). One problem with the original Viola–Jones algorithm is that hard objects reduce the detection rate. To address this difficulty, the authors proposed a face detection technique based on the Viola–Jones algorithm that uses compound features. The chief steps of this algorithm are as follows:
(a) First, the human face is detected in a rectangular frame in the input image using the Viola–Jones algorithm.
(b) The faces in the rectangular frame are then calibrated and processed into four types of subimages.
(c) NLDA (null space linear discriminant analysis) is used to extract features from the complete facial picture and the four subpictures.
(d) The discriminant distance is computed for all extracted features (local and global).
(e) Finally, the regions with considerable discriminant distance are selected to form the new compound feature vectors, which are fed to the classifier for face recognition.
The original Viola–Jones algorithm produced several missed and false face detections. The main disadvantages of this method are that training the model takes a long time, that its NIR (near-infrared) detection capability decreases with distance, and that it was tested only on single-face frontal images. Nevertheless, the method described in that paper has lower false detection and missed detection rates than the original Viola–Jones algorithm.

(v) Face detection using improved Faster R-CNN. Face detection using deep learning CNNs is a rapidly growing technology, with successive generations such as R-CNN and Fast R-CNN computing accurate results in limited time. The authors of ([103] and references therein) used the fully connected Faster R-CNN to obtain excellent performance in a short amount of time, with a ResNet architecture to extract optimized features and a feature map to derive the image context. This method is useful, but the inference speed and detection performance could have been better with a lighter backbone.

The five most popular and widely used face detection techniques described above are compared on different parameters in Table 2 to give readers insight. Their merits and demerits are compared in Table 3.
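The neighbour-versus-centre comparison at the heart of the improved LBP technique can be sketched in a few lines of Python. The 3×3 window and the clockwise bit order used here are common LBP conventions, assumed for illustration rather than taken from [101].

```python
def lbp_code(patch):
    """Compute the 8-bit LBP code of a 3x3 intensity patch.

    Each neighbour that is greater than or equal to the centre pixel
    contributes a 1 bit; bits are read clockwise from the top-left.
    """
    centre = patch[1][1]
    # Neighbour coordinates, clockwise from the top-left corner.
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = ["1" if patch[r][c] >= centre else "0" for r, c in order]
    return int("".join(bits), 2)
```

Collecting a histogram of these codes over each face region yields the feature vector that the improved LBP method compares against reference faces.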


Parameter | Haar cascade algorithm [4] | Geometric-based algorithm [100] | Viola–Jones algorithm with composite features [102] | Improved LBP algorithm [101] | Faster R-CNN algorithm [103]
Precision | Low | Low | Very high | High | Highest
Execution time | High | High | Low | Low | Low
Learning time | High | High | Low | High | Low
Ratio between detection rate and false alarm | High | Low | High | High | Very high


Technique | Merits | Demerits
Haar cascade algorithm [4] | Lower false alarm rate; improved feature extraction | Complex to implement
Geometric-based algorithm [100] | Effective approach; easy implementation | Low accuracy; more false alarms
Viola–Jones algorithm with composite features [102] | High accuracy rate and low false detection rate | Sensitive to illumination variations
Improved LBP algorithm [101] | Simple to implement | Not robust
Faster R-CNN algorithm [103] | Very high precision | No notable demerits reported

4.2.2. Navigation System

Navigation is performed on a grid map after all the areas have been inspected. The navigation planner steers autonomously through the grid map from one pose to another. The global planner traces the path from the current position to the goal position using the A-Star algorithm, while the local planner generates linear and angular velocities along the global path while avoiding obstacles based on the cost map parameters. The local planner uses the dynamic window approach (DWA) to evaluate candidate velocities. The estimation node publishes the poses (x, y, z, q), which are transformed into the north-east-down (NED) frame and then converted into motor velocities by a PID controller whose commands are conveyed to the motor controller.

(i) ROS implementation. The ROS platform was originally built for robots operating on the ground and provides navigation techniques for such robots, so it is not directly suited to UAVs. Nevertheless, the procedure adopted by ROS is instruction based. This implementation gives the robot its control space and predicts paths that result in near-perfect indoor mapping with an autonomous vehicle. The authors in [104] have proposed an efficient way to implement a drone that can locate the victim and pass the signal to the autonomous robotic rescue boat via the RF module.

(ii) Simulation software. All simulations can be performed in the Gazebo software. Gazebo has a physics engine that imitates the real motions of various arrangements of USVs, making it much easier to test the system in many kinds of situations. The software is also convenient for adding new sensors, making design changes, rapidly testing algorithms, and quick prototyping. For programming and controlling the USV, the robot operating system (ROS) is used. ROS functions as a bridge between an operating system or database and various applications, specifically over a network, for robotics; it provides a software framework that can be used as a tool for software development. It offers a large set of libraries for robots, with a primary focus on mobility, perception, and manipulation, and a set of tools for testing, debugging, and visualizing sensor data for networked, multirobot, and other distributed systems. One more reason for using ROS is its strong integration with the Gazebo simulator.

(iii) Estimation and control. A Kalman filter [105] is used to fuse all the sensor data coming from the UAV into a single set of navigational information to estimate the orientation, velocity, and position of the USV together with the sensor fault bias. A set of proportional-integral-derivative (PID) controllers regulates the attitude, the twisting or oscillation rate of the system, and the speed and heading of the carrier. The output values, which contain the torques and thrust, are converted into motor voltages that give a response similar to the real aerial vehicle. ArduPilot, an open-source flight controller, converts these messages into the final motor voltages and ultimately handles the simulation and flying of the vehicle. To obtain desirable responses from the aerial vehicle, such as complex manoeuvres and hovering in place, the parameters of every component can be tuned for the desired output. Other path planning methods can be executed in the control loop to obtain a smoother output from the UAV. Such software-in-the-loop strategies provide greater flexibility in examining the algorithms prior to real implementation on an actual stage and avoid the chance of injury or damage at an early stage.
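The global planner's A-Star search over the grid map can be sketched as follows. This is a minimal 4-connected implementation with a Manhattan heuristic; the grid encoding and function name are illustrative assumptions standing in for the planner used in the reviewed system.

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """4-connected A-Star on an occupancy grid (0 = free, 1 = obstacle).

    Returns the path as a list of (row, col) cells, or None if the
    goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    tie = itertools.count()  # tiebreaker so the heap never compares cells
    open_set = [(h(start), next(tie), start)]
    came = {start: None}     # parent pointers for path reconstruction
    g_best = {start: 0}      # best known cost-to-come per cell
    while open_set:
        _, _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_best[cur] + 1
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    came[nb] = cur
                    heapq.heappush(open_set, (ng + h(nb), next(tie), nb))
    return None
```

In the full system, each returned cell would be converted to a pose in the NED frame and handed to the local DWA planner.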

5. Discussion

Autonomous rescue boats that are easy to operate are currently used by a range of scientists and government agencies to provide rescue operations and assistance in marine bodies. Micro-USVs can operate autonomously and can be configured to communicate through a GPS connection that allows data to be disseminated. Communication plays a fundamental role in a rescue operation. Since connectivity depends on the environmental conditions in which the system operates, such as the wave conditions and the local weather, localization problems caused by weak GPS signals may occur and disrupt communication with the boat; these must not be underestimated. For better communication, several GPS modules have been used to obtain a precise location, so the localization techniques remain versatile in case of GPS problems. An additional powerful transmitter embedded with the modules can increase the range and enhance the capabilities of the system. In addition, the receiver can be tuned to reduce the distortion caused by unwanted signals, which further extends the communication range of the system. Among the GPS modules, the LS23060 has good receiving sensitivity, fast positioning time, high positional and time accuracy, and low power consumption.
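One simple way to combine fixes from several GPS modules into a single, more precise position, as suggested above, is inverse-variance weighting: modules reporting a smaller accuracy estimate dominate the result. This is an illustrative sketch, not a technique taken from the reviewed papers.

```python
def fuse_fixes(fixes):
    """Fuse (lat, lon, sigma) GPS fixes by inverse-variance weighting.

    sigma is each module's reported position uncertainty; a smaller
    sigma gives that fix more influence on the fused estimate.
    """
    w_sum = lat_sum = lon_sum = 0.0
    for lat, lon, sigma in fixes:
        w = 1.0 / (sigma * sigma)
        w_sum += w
        lat_sum += w * lat
        lon_sum += w * lon
    return lat_sum / w_sum, lon_sum / w_sum
```

With equal uncertainties this reduces to a plain average; with unequal uncertainties the more accurate module pulls the estimate toward its fix.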

The new Raspberry Pi 3 is the fastest model and is quite cheap. It is essentially a small minicomputer that can be connected to a TV or a computer monitor and used with any standard mouse and keyboard, enabling people of all ages to explore computing and program in Python. The TB6612 motor driver has outstanding efficiency: with a 12.02 V supply it reaches 95.97% efficiency, and its RPM is also the highest among the compared motor drivers. The device is a surface-mount chip available in many standard modules, including shields for the Arduino and HATs for the Raspberry Pi.

Among the face detection algorithms, the Faster R-CNN algorithm performs best: it has the highest precision, much lower execution time, a short learning time, and a high ratio between the detection rate and the false alarm rate.

Implementing the LS23060 GPS module along with Raspberry Pi 3 and Faster R-CNN algorithm could further make the search and rescue boat more effective and reliable.

6. Contributions and Novelty

There has been an increase in the popularity of boating and other marine-based activities, and with the growing number of people interested in water-related activities, mishaps have risen considerably. Autonomous rescue boats at sea can be launched with minimal support and are an effective way to perform rescue operations. If an incident happens, the system generates a quick response and transmits a message to the autonomous robotic rescue boat. Once the boat receives the message, it approaches the place the message came from and rescues the person.

In manual rescue systems, human rescuers put their lives at risk to rescue people who fall into water bodies, and in many cases the rescuers die while undertaking the rescue operation; an autonomous rescue boat reduces this risk of loss of human life. Even if the autonomous robotic rescue boat suffers some damage while rescuing, it is not much of an issue because it can be repaired. Autonomous devices are also faster, more accurate, and more responsive than manual rescue systems; moreover, they can perform under inclement weather conditions and even in the dark. An autonomous rescue system can also be used by the Navy during war, when manual rescue systems become impractical.

An autonomous rescue system continues to perform even as the duration of the operation lengthens, whereas a manual rescue system is limited by human endurance.

Autonomous boats can be designed to be highly robust and to carry considerable loads irrespective of the climatic conditions. Their structure and base keep them stable on the water even during unfavorable weather. They carry all the necessary first aid equipment required for the person being rescued. Thus, autonomous boats provide a more agile and efficient alternative to the manual involvement required in a marine rescue operation.

Not only does this paper discuss the various contributions that this project can make in different fields, but the project is also novel because of the following factors:
(a) Less human involvement: since it is an automatic boat, the manpower required to operate the system is limited
(b) Quick response: the system uses an effective way to reach the place of the accident
(c) Flexibility under all weather conditions: it can withstand various weather extremes and respond accordingly
(d) The system can operate over large areas and distances
(e) It integrates a monitoring system for the rescue operation
(f) It is based on a highly accurate face detection algorithm for better human recognition

7. Applications of Autonomous Robotic Rescue Boat

(a) Rescue of people from seashores: at the seaside, there are danger zones where life-threatening incidents can happen at any time due to high tides or other hazardous activities in the water. Autonomous robotic rescue boats deployed at these sites can locate the place of an accident and transmit an alert message to the base station, which then sends the rescue boat to the accident site.
(b) Military, defence, and coastal security applications: the autonomous robotic rescue boat can navigate to and locate an underwater object of interest and then perform further autonomous manipulations as required by the respective authorities.
(c) Patrolling and minesweeping: autonomous rescue boats can detect and remove naval mines using various mechanisms, keeping the waterway clear for safe shipping during uncertain times such as wars.
(d) Natural disaster relief: in natural disasters such as floods, landslides, and tsunamis, the autonomous robotic rescue boat can detect people who have drowned in water bodies or have been wounded, and can save thousands of lives.
(e) Environmental monitoring systems: these play a crucial role in disaster relief by detecting and inspecting critical underwater infrastructure, measuring the destruction, and recognizing sources of pollution in harbors and fishing areas.
(f) Inspection of disaster damage: the autonomous robotic rescue boat can survey the damage caused by natural disasters such as tsunamis with its fitted camera, or rescue victims stranded on vessels that are sinking or have lost the ability to move.
(g) Rescue operations during water-sport events: with the increase in the number of people interested in water-related activities, the probability of mishaps has grown over the years, and autonomous rescue boats can be of great help during these events.

8. Conclusion

The study of autonomous robotic rescue boats is continuously increasing, and researchers are still working to make this technology more advanced. In this paper, the autonomous rescue boat and the techniques that govern it are reviewed. Several articles from 2016 to 2020 showing different methods of rescuing a person have been studied. The review has been carried out in such a way that the papers are differentiated according to the techniques used, the main challenges in autonomous vehicles, and the navigation of the rescue boat. Some papers indicated that the larger the boat, the greater the chance of collisions, whereas a small-sized boat's trajectory can be controlled easily and its navigation does not become cumbersome and challenging. However, the effect of waves cannot be ignored, which makes it difficult to estimate the trajectory of the autonomous rescue boat precisely. The low accuracy of sensors and devices also affects the images taken from the cameras. Therefore, researchers have mainly addressed the problems faced in achieving excellent manoeuvrability.

The hardware and software both play an essential role in designing a better rescue boat. Hardware such as GPS devices (transmitters and receivers), a Raspberry Pi, an RF module, motor drivers, motors, batteries, and an Arduino, together with intensive use of software such as face detection and various algorithms, makes the whole rescue process feasible and efficient. The ROS platform has been used to implement navigation for ground-based robots and to simulate trajectories. A drone is used to locate the victim and pass the signal via the RF module to the autonomous robotic rescue boat. The Gazebo software allows simulations that replicate the real motions of various arrangements of the autonomous boat in different scenarios. A Kalman filter is used to combine all the sensor data coming from the boat into one navigational dataset to estimate the position and speed of the boat. Further development of this project should focus on a more powerful motor for the rescue unit. For even lower delay, the code can be optimized for better and more advanced tracking, improved noise reduction, and, ideally, an increased range for homing in on the victim by tuning the receivers further. After these optimization techniques and modifications are added, the system will become much more effective and reliable for man-overboard rescue operations in the actual field environment. The amount of data will never cease to increase, and new data and information will keep coming, so future studies in this area should consider whether static models are reliable enough for long-term application or whether lifelong training should be given more thought.
This review can help other scientists and researchers that are studying this field and encourage them to gather more data and information for their research analysis.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. M. Abdul Haq, M. L. Afakh, C. Sugianto et al., “Towards a robust maneuvering autonomous surface vehicle,” in Proceedings of the Kontes Kapal Cepat Tak berawak Nasional, Malang, Indonesia, 2016. View at: Google Scholar
  2. G. Ferri, F. Ferreira, V. Djapic, Y. Petillot, M. P. Franco, and A. Winfield, “The euRathlon 2015 grand challenge: the first outdoor multi-domain search and rescue robotics competition-A marine perspective,” Marine Technology Society Journal, vol. 50, no. 4, pp. 81–97, 2016. View at: Publisher Site | Google Scholar
  3. R. Hamsavahini, S. Varun, and S. Narayana, “Development of light weight algorithms in a customized communication protocol for micro air vehicles,” International Journal of Latest Research in Engineering and Technology, vol. 1, pp. 73–79, 2016. View at: Google Scholar
  4. S. Choudhury, S. P. Chattopadhyay, and T. K. Hazra, “Vehicle detection and counting using haar feature-based classifier,” in Proceedings of the 2017 8th Annual Industrial Automation and Electromechanical Engineering Conference (IEMECON), pp. 106–109, IEEE, Bangkok, Thailand, August 2017. View at: Google Scholar
  5. C.-Y. Tsai and S.-H. Tsai, “Simultaneous 3D object recognition and pose estimation based on RGB-D images,” IEEE Access, vol. 6, pp. 28859–28869, 2018. View at: Publisher Site | Google Scholar
  6. S. R. Balaji and S. Karthikeyan, “A survey on moving object tracking using image processing,” in Proceedings of the 2017 11th International Conference on Intelligent Systems and Control (ISCO), pp. 469–474, IEEE, Coimbatore, India, January 2017. View at: Google Scholar
  7. H. Yu, G. Li, W. Zhang et al., “The unmanned aerial vehicle benchmark: object detection, tracking and baseline,” International Journal of Computer Vision, vol. 128, no. 5, pp. 1141–1159, 2020. View at: Publisher Site | Google Scholar
  8. E. R. Stepanova, M. von der Heyde, A. Kitson, T. Schiphorst, and B. E. Riecke, “Gathering and applying guidelines for mobile robot design for urban search and rescue application,” in Proceedings of the International Conference on Human-Computer Interaction, Lecture Notes in Computer Science, pp. 562–581, Springer, Vancouver, Canada, July 2017. View at: Publisher Site | Google Scholar
  9. N. Miskovic, D. Nad, and I. Rendulic, “Tracking divers: an autonomous marine surface vehicle to increase diver safety,” IEEE Robotics & Automation Magazine, vol. 22, no. 3, pp. 72–84, 2015. View at: Publisher Site | Google Scholar
  10. M. L. Afakh, C. Sugianto, C. Aldian, I. K. Wibowo, and A. Risnumawan, “Towards a robust maneuvering autonomous surface vehicle,” in Proceedings of the Kontes Kapal Cepat Tak berawak Nasional, Surabaya, Indonesia, December 2016. View at: Google Scholar
  11. D. S. López, G. Moreno, J. Cordero et al., “Interoperability in a heterogeneous team of search and rescue robots,” in Search and Rescue Robotics-From Theory to Practice, IntechOpen Limited, London, UK, 2017. View at: Google Scholar
  12. A. V. Rodrigues, R. S. Carapau, M. M. Marques, V. Lobo, and F. Coito, “Unmanned systems interoperability in military maritime operations: MAVLink to STANAG 4586 bridge,” in Proceedings of the OCEANS 2017, pp. 1–5, IEEE, Aberdeen, Scotland, June 2017. View at: Google Scholar
  13. M. C. Nielsen, O. A. Eidsvik, M. Blanke, and I. Schjølberg, “Constrained multi-body dynamics for modular underwater robots - theory and experiments,” Ocean Engineering, vol. 149, pp. 358–372, 2018. View at: Publisher Site | Google Scholar
  14. C. Wong, E. Yang, X. T. Yan, and D. Gu, “An overview of robotics and autonomous systems for harsh environments,” in Proceedings of the 2017 23rd International Conference on Automation and Computing (ICAC), pp. 1–6, IEEE, Huddersfield, UK, September 2017. View at: Google Scholar
  15. D. K. Prasad, D. Rajan, L. Rachmawati, E. Rajabally, and C. Quek, “Video processing from electro-optical sensors for object detection and tracking in a maritime environment: a survey,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 8, pp. 1993–2016, 2017. View at: Publisher Site | Google Scholar
  16. D. Moreno-Salinas, A. Pascoal, and J. Aranda, “Optimal sensor placement for acoustic underwater target positioning with range-only measurements,” IEEE Journal of Oceanic Engineering, vol. 41, no. 3, pp. 620–643, 2016. View at: Publisher Site | Google Scholar
  17. M. Bayat, N. Crasta, A. P. Aguiar, and A. M. Pascoal, “Range-based underwater vehicle localization in the presence of unknown ocean currents: theory and experiments,” IEEE Transactions on Control Systems Technology, vol. 24, no. 1, pp. 122–139, 2015. View at: Google Scholar
  18. V. Govindarajan, S. Bhattacharya, and V. Kumar, “Human-robot collaborative topological exploration for search and rescue applications, Springer tracts in advanced robotics,” in Distributed Autonomous Robotic Systems, vol. 112, pp. 17–32, Springer, Tokyo, Japan, 2016. View at: Publisher Site | Google Scholar
  19. P. Govindhan, R. B. Kuruvilla, D. Shanmugasundar, M. Thangapandi, and G. Venkateswaran, “Human detecting aqua robot using PIR sensors,” International Journal of Engineering Science, vol. 7, pp. 6549–6553, 2017. View at: Google Scholar
  20. T. Niedzielski, M. Jurecka, B. Miziński et al., “A real-time field experiment on search and rescue operations assisted by unmanned aerial vehicles,” Journal of Field Robotics, vol. 35, no. 6, pp. 906–920, 2018. View at: Publisher Site | Google Scholar
  21. R. Cui, Y. Li, and W. Yan, “Mutual information-based multi-AUV path planning for scalar field sampling using multidimensional RRT,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 7, pp. 993–1004, 2015. View at: Google Scholar
  22. D. Moreno-Salinas, N. Crasta, M. Ribeiro, B. Bayat, A. M. Pascoal, and J. Aranda, “Integrated motion planning, control, and estimation for range-based marine vehicle positioning and target localization,” IFAC-PapersOnLine, vol. 49, no. 23, pp. 34–40, 2016. View at: Publisher Site | Google Scholar
  23. L. Liu and G. S. Sukhatme, “A solution to time-varying Markov decision processes,” IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1631–1638, 2018. View at: Publisher Site | Google Scholar
  24. A. Zamuda, J. D. Hernández Sosa, and L. Adler, “Constrained differential evolution optimization for underwater glider path planning in sub-mesoscale eddy sampling,” Applied Soft Computing, vol. 42, pp. 93–118, 2016. View at: Publisher Site | Google Scholar
  25. R. T. Schofield, G. A. Wilde, and R. R. Murphy, “Potential field implementation for move-to-victim behavior for a lifeguard assistant unmanned surface vehicle,” in Proceedings of the 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1-2, IEEE, Philadelphia, PA, USA, August 2018. View at: Google Scholar
  26. V. Jorge, R. Granada, R. Maidana et al., “A survey on unmanned surface vehicles for disaster robotics: main challenges and directions,” Sensors, vol. 19, no. 3, p. 702, 2019. View at: Publisher Site | Google Scholar
  27. A. Matos, E. Silva, J. Almeida et al., “Unmanned maritime systems for search and rescue,” Search and Rescue Robotics, IntechOpen, London, UK, 2017. View at: Google Scholar
  28. F. C. Teixeira, J. Quintas, and A. Pascoal, “Experimental validation of magnetic navigation of marine robotic vehicles,” IFAC-PapersOnLine, vol. 49, no. 23, pp. 273–278, 2016. View at: Publisher Site | Google Scholar
  29. H. Hajieghrary, M. A. Hsieh, and I. B. Schwartz, “Multi-agent search for source localization in a turbulent medium,” Physics Letters A, vol. 380, no. 20, pp. 1698–1705, 2016. View at: Publisher Site | Google Scholar
  30. P. Chamoso, A. González-Briones, A. Rivas, F. Bueno De Mata, and J. Corchado, “The use of drones in Spain: towards a platform for controlling UAVs in urban environments,” Sensors, vol. 18, no. 5, p. 1416, 2018. View at: Publisher Site | Google Scholar
  31. H. Balta, J. Bedkowski, S. Govindaraj et al., “Integrated data management for a fleet of search-and-rescue robots,” Journal of Field Robotics, vol. 34, no. 3, pp. 539–582, 2017. View at: Publisher Site | Google Scholar
  32. Y. Ham, K. K. Han, J. J. Lin, and M. Golparvar-Fard, “Visual monitoring of civil infrastructure systems via camera-equipped Unmanned Aerial Vehicles (UAVs): a review of related works,” Visualization in Engineering, vol. 4, no. 1, p. 1, 2016. View at: Publisher Site | Google Scholar
  33. F. J. Mesas-Carrascosa, I. Clavero Rumbao, J. Torres-Sánchez, A. García-Ferrer, J. M. Peña, and F. López Granados, “Accurate ortho-mosaicked six-band multispectral UAV images as affected by mission planning for precision agriculture proposes,” International Journal of Remote Sensing, vol. 38, no. 8-10, pp. 2161–2176, 2017. View at: Publisher Site | Google Scholar
  34. R. Petroccia, J. Alves, and G. Zappa, “JANUS-based services for operationally relevant underwater applications,” IEEE Journal of Oceanic Engineering, vol. 42, no. 4, pp. 994–1006, 2017. View at: Publisher Site | Google Scholar
  35. A. M. Hein, F. Carrara, D. R. Brumley, R. Stocker, and S. A. Levin, “Natural search algorithms as a bridge between organisms, evolution, and ecology,” Proceedings of the National Academy of Sciences, vol. 113, no. 34, pp. 9413–9420, 2016. View at: Publisher Site | Google Scholar
  36. R. Reshma, T. Ramesh, and P. Sathishkumar, “Security situational aware intelligent road traffic monitoring using UAVs,” in Proceedings of the 2016 International Conference on VLSI Systems, Architectures, Technology and Applications (VLSI-SATA), pp. 1–6, IEEE, Bengaluru, India, January 2016. View at: Google Scholar
  37. B. M. Ferreira, A. C. Matos, and J. C. Alves, “Water-jet propelled autonomous surface vehicle UCAP: system description and control,” in Proceedings of the OCEANS 2016, pp. 1–5, IEEE, Shanghai, China, April 2016. View at: Google Scholar
  38. E. Akyuz, E. Ilbahar, S. Cebi, and M. Celik, “Maritime environmental disaster management using intelligent techniques,” in Intelligence Systems in Environmental Management: Theory and Applications, Intelligent Systems Reference Library, pp. 135–155, Springer, Cham, Switzerland, 2017. View at: Publisher Site | Google Scholar
  39. L. Y. Sørensen, L. T. Jacobsen, and J. P. Hansen, “Low cost and flexible UAV deployment of sensors,” Sensors, vol. 17, no. 1, p. 154, 2017. View at: Google Scholar
  40. M. Nazarahari, E. Khanmirza, and S. Doostie, “Multi-objective multi-robot path planning in continuous environment using an enhanced genetic algorithm,” Expert Systems with Applications, vol. 115, pp. 106–120, 2019. View at: Publisher Site | Google Scholar
  41. T. T. Mac, C. Copot, D. T. Tran, and R. De Keyser, “Heuristic approaches in robot path planning: a survey,” Robotics and Autonomous Systems, vol. 86, pp. 13–28, 2016. View at: Publisher Site | Google Scholar
  42. M. N. Ab Wahab, S. Nefti-Meziani, and A. Atyabi, “A comparative review on mobile robot path planning: classical or meta-heuristic methods,” Annual Reviews in Control, vol. 50, 2020. View at: Google Scholar
  43. G. A. Wilde and R. R. Murphy, “User interface for unmanned surface vehicles used to rescue drowning victims,” in Proceedings of the 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1–8, IEEE, Philadelphia, PA, USA, August 2018. View at: Google Scholar
  44. S. Gu, C. Zhou, Y. Wen et al., “A motion planning method for unmanned surface vehicle in restricted waters,” Proceedings of the Institution of Mechanical Engineers, Part M: Journal of Engineering for the Maritime Environment, vol. 234, no. 2, pp. 332–345, 2020. View at: Publisher Site | Google Scholar
  45. A. Khalifeh, K. Rajendiran, K. A. Darabkh, A. M. Khasawneh, O. AlMomani, and Z. Zinonos, “On the potential of fuzzy logic for solving the challenges of cooperative multi-robotic wireless sensor networks,” Electronics, vol. 8, no. 12, p. 1513, 2019. View at: Publisher Site | Google Scholar
  46. Z. Uddin and M. Islam, “Search and rescue system for alive human detection by semi-autonomous mobile rescue robot,” in Proceedings of the 2016 International Conference on Innovations in Science, Engineering and Technology (ICISET), pp. 1–5, IEEE, Chittagong, Bangladesh, October 2016. View at: Google Scholar
  47. T. C. Murulidhara, C. Kanagasabapthi, and S. S. Yellampalli, “Unmanned vehicle to detect alive human during calamity,” in Proceedings of the 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), pp. 84–88, IEEE, Mysuru, India, December 2017. View at: Google Scholar
  48. J. P. Queralta, J. Taipalmaa, B. Can Pullinen et al., “Collaborative multi-robot search and rescue: planning, coordination, perception, and active vision,” IEEE Access, vol. 8, pp. 191617–191643, 2020. View at: Publisher Site | Google Scholar
  49. J. P. Queralta, L. Qingqing, F. Schiano, and T. Westerlund, “VIO-UWB-based collaborative localization and dense scene reconstruction within heterogeneous multi-robot systems,” 2020, https://arxiv.org/abs/2011.00830. View at: Google Scholar
  50. C. Galarza, I. Masmitja, J. Prat, and S. Gomaríz, “Design of obstacle detection and avoidance system for Guanay II AUV,” in Proceedings of the 2016 24th Mediterranean Conference on Control and Automation (MED), pp. 410–414, IEEE, Athens, Greece, June 2016. View at: Google Scholar
  51. A. Stateczny, W. Kazimierski, P. Burdziakowski, W. Motyl, and M. Wisniewska, “Shore construction detection by automotive radar for the needs of autonomous surface vehicle navigation,” ISPRS International Journal of Geo-Information, vol. 8, no. 2, p. 80, 2019. View at: Publisher Site | Google Scholar
  52. G. Bruzzone, M. Bibuli, E. Zereik, A. Ranieri, and M. Caccia, “Cooperative adaptive guidance and control paradigm for marine robots in an emergency ship towing scenario,” International Journal of Adaptive Control and Signal Processing, vol. 31, no. 4, pp. 562–580, 2017. View at: Publisher Site | Google Scholar
  53. J. Fan, Y. Li, Y. Liao et al., “Second path planning for unmanned surface vehicle considering the constraint of motion performance,” Journal of Marine Science and Engineering, vol. 7, no. 4, p. 104, 2019. View at: Publisher Site | Google Scholar
  54. Y. Singh, S. Sharma, R. Sutton, D. Hatton, and A. Khan, “Feasibility study of a constrained Dijkstra approach for optimal path planning of an unmanned surface vehicle in a dynamic maritime environment,” in Proceedings of the 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), pp. 117–122, IEEE, Torres Vedras, Portugal, May 2018. View at: Google Scholar
  55. S. Biswas, S. G. Anavatti, and M. A. Garratt, “Nearest neighbour based task allocation with multi-agent path planning in dynamic environments,” in Proceedings of the 2017 International Conference on Advanced Mechatronics, Intelligent Manufacture, and Industrial Automation (ICAMIMIA), pp. 181–186, IEEE, Surabaya, Indonesia, October 2017. View at: Google Scholar
  56. B. Bayat, N. Crasta, H. Li, and A. Ijspeert, “Optimal search strategies for pollutant source localization,” in Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1801–1807, IEEE, Daejeon, South Korea, October 2016. View at: Google Scholar
  57. J. M. Soares, A. P. Aguiar, A. M. Pascoal, and A. Martinoli, “An algorithm for formation-based chemical plume tracing using robotic marine vehicles,” in Proceedings of the OCEANS 2016 MTS/IEEE Monterey, pp. 1–8, IEEE, Monterey, CA, USA, September 2016. View at: Google Scholar
  58. W. J. Thrift, A. Cabuslay, A. B. Laird, S. Ranjbar, A. I. Hochbaum, and R. Ragan, “Surface-enhanced Raman scattering-based odor compass: locating multiple chemical sources and pathogens,” ACS Sensors, vol. 4, no. 9, pp. 2311–2319, 2019. View at: Publisher Site | Google Scholar
  59. S. Siyang and T. Kerdcharoen, “Development of unmanned surface vehicle for smart water quality inspector,” in Proceedings of the 2016 13th International Conference on Electrical Engineering/electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 1–5, IEEE, Chiang Mai, Thailand, June-July 2016. View at: Google Scholar
  60. B. Bayat, N. Crasta, A. Crespi, A. M. Pascoal, and A. Ijspeert, “Environmental monitoring using autonomous vehicles: a survey of recent searching techniques,” Current Opinion in Biotechnology, vol. 45, pp. 76–84, 2017. View at: Publisher Site | Google Scholar
  61. Z. Li, K. Gavrilyuk, E. Gavves, M. Jain, and C. G. M. Snoek, “Videolstm convolves, attends and flows for action recognition,” Computer Vision and Image Understanding, vol. 166, pp. 41–50, 2018. View at: Publisher Site | Google Scholar
  62. K. B. Bhangale, K. M. Jadhav, and Y. R. Shirke, “Robust pose invariant face recognition using DCP and LBP,” International Journal of Management, Technology and Engineering, vol. 8, no. 9, pp. 1026–1034, 2018. View at: Google Scholar
  63. P. Kamencay, M. Benčo, T. Miždoš, and R. Radil, “A new method for face recognition using convolutional neural network,” Advances in Electrical and Electronic Engineering, vol. 15, no. 4, 2017. View at: Publisher Site | Google Scholar
  64. J. P. Queralta, J. Taipalmaa, B. C. Pullinen et al., “Collaborative multi-robot systems for search and rescue: coordination and perception,” 2020, https://arxiv.org/abs/2008.12610. View at: Google Scholar
  65. W. Zhao, J. P. Queralta, and T. Westerlund, “Sim-to-real transfer in deep reinforcement learning for robotics: a survey,” 2020, https://arxiv.org/abs/2009.13303. View at: Google Scholar
  66. J. Sun, B. Li, Y. Jiang, and C.-y. Wen, “A camera-based target detection and positioning UAV system for search and rescue (SAR) purposes,” Sensors, vol. 16, no. 11, p. 1778, 2016. View at: Publisher Site | Google Scholar
  67. G. D. Cubber, D. Doroftei, K. Rudin et al., Introduction to the Use of Robotic Tools for Search and Rescue, IntechOpen Limited, London, UK, 2017.
  68. E. Lygouras, N. Santavas, A. Taitzoglou, K. Tarchanidis, A. Mitropoulos, and A. Gasteratos, “Unsupervised human detection with an embedded vision system on a fully autonomous uav for search and rescue operations,” Sensors, vol. 19, no. 16, p. 3542, 2019. View at: Publisher Site | Google Scholar
  69. A. Mejía, D. Marcillo, M. Guaño, and T. Gualotuña, “Serverless based control and monitoring for search and rescue robots,” in Proceedings of the 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–6, IEEE, Sevilla, Spain, June 2020. View at: Google Scholar
  70. B. El Mahrad, A. Newton, J. Icely, I. Kacimi, S. Abalansa, and M. Snoussi, “Contribution of remote sensing technologies to a holistic coastal and marine environmental management framework: a review,” Remote Sensing, vol. 12, no. 14, p. 2313, 2020. View at: Publisher Site | Google Scholar
  71. M. Paravisi, D. H. Santos, V. Jorge, G. Heck, L. Gonçalves, and A. Amory, “Unmanned surface vehicle simulator with realistic environmental disturbances,” Sensors, vol. 19, no. 5, p. 1068, 2019. View at: Publisher Site | Google Scholar
  72. D. D. Gaffin and C. M. Curry, “Arachnid navigation - a review of classic and emerging models,” The Journal of Arachnology, vol. 48, no. 1, pp. 1–25, 2020. View at: Publisher Site | Google Scholar
  73. V. Sophias, “Robots in crisis management: a survey,” in Proceedings of the Information Systems for Crisis Response and Management in Mediterranean Countries: 4th International Conference, ISCRAM-med 2017, vol. 301, p. 43, Springer, Xanthi, Greece, October 2017. View at: Google Scholar
  74. R. Bogue, “Disaster relief, and search and rescue robots: the way forward,” Industrial Robot: The International Journal of Robotics Research and Application, vol. 46, no. 2, pp. 181–187, 2019. View at: Publisher Site | Google Scholar
  75. J. Delmerico, S. Mintchev, A. Giusti et al., “The current state and future outlook of rescue robotics,” Journal of Field Robotics, vol. 36, no. 7, pp. 1171–1191, 2019. View at: Publisher Site | Google Scholar
  76. L. Del Rosario, J. J. Ramirez, L. Romero, K. Ortiz, A. Ocasio, and E. I. O. Rivera, “U-WaVe: unmanned water vehicle for coastal surveillance and search and rescue: undergraduate research experience,” in Proceedings of the 2018 IEEE International Symposium on Technologies for Homeland Security (HST), pp. 1–7, IEEE, Crystal City, VA, USA, May 2018. View at: Google Scholar
  77. E. Simetti and G. Casalino, “Manipulation and transportation with cooperative underwater vehicle manipulator systems,” IEEE Journal of Oceanic Engineering, vol. 42, no. 4, pp. 782–799, 2016. View at: Google Scholar
  78. E. Simetti, F. Wanderlingh, S. Torelli, M. Bibuli, A. Odetti et al., “Autonomous underwater intervention: experimental results of the MARIS project,” IEEE Journal of Oceanic Engineering, vol. 43, no. 3, pp. 620–639, 2017. View at: Google Scholar
  79. G. Casalino, M. Caccia, S. Caselli et al., “Underwater intervention robotics: an outline of the Italian national project MARIS,” Marine Technology Society Journal, vol. 50, no. 4, pp. 98–107, 2016. View at: Publisher Site | Google Scholar
  80. N. Palomeras, A. Peñalver, M. Massot-Campos et al., “I-AUV docking and panel intervention at sea,” Sensors, vol. 16, no. 10, p. 1673, 2016. View at: Publisher Site | Google Scholar
  81. Z. Yan, X. Liu, J. Zhou, and D. Wu, “Coordinated target tracking strategy for multiple unmanned underwater vehicles with time delays,” IEEE Access, vol. 6, pp. 10348–10357, 2018. View at: Publisher Site | Google Scholar
  82. B. Sun, D. Zhu, C. Tian, and C. Luo, “Complete coverage autonomous underwater vehicles path planning based on glasius bio-inspired neural network algorithm for discrete and centralized programming,” IEEE Transactions on Cognitive and Developmental Systems, vol. 11, no. 1, pp. 73–84, 2018. View at: Google Scholar
  83. R. Bravo and A. Leiras, “Literature review of the application of UAVs in humanitarian relief,” in Proceedings of the XXXV Encontro Nacional de Engenharia de Producao, pp. 13–16, Fortaleza, Brazil, October 2015. View at: Google Scholar
  84. Y. Zhu, “A multi-AUV searching algorithm based on neuron network with obstacle,” in Proceedings of the 2019 3rd International Symposium on Autonomous Systems (ISAS), pp. 131–136, IEEE, Shanghai, China, May 2019. View at: Google Scholar
  85. J. G. Baylog and T. A. Wettergren, “A ROC-Based approach for developing optimal strategies in UUV search planning,” IEEE Journal of Oceanic Engineering, vol. 43, no. 4, pp. 843–855, 2017. View at: Google Scholar
  86. Z. Lichuan, F. Jingxiang, W. Tonghao, G. Jian, and Z. Ru, “A new algorithm for collaborative navigation without time synchronization of multi-UUVs,” in Proceedings of the OCEANS 2017, pp. 1–6, IEEE, Aberdeen, Scotland, June 2017. View at: Google Scholar
  87. J. González-García, A. Gómez-Espinosa, E. Cuan-Urquizo, L. G. García-Valdovinos, T. Salgado-Jiménez, and J. A. E. Cabello, “Autonomous underwater vehicles: localization, navigation, and communication for collaborative missions,” Applied Sciences, vol. 10, no. 4, p. 1256, 2020. View at: Publisher Site | Google Scholar
  88. A. Abad, N. DiLeo, and K. Fregene, “Decentralized model predictive control for UUV collaborative missions,” in Proceedings of the OCEANS 2017, pp. 1–6, IEEE, Anchorage, Alaska, September 2017. View at: Google Scholar
  89. A. Rozantsev, V. Lepetit, and P. Fua, “Detecting flying objects using a single moving camera,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 5, pp. 879–892, 2016. View at: Google Scholar
  90. A. Rivas, P. Chamoso, A. González-Briones, and J. Corchado, “Detection of cattle using drones and convolutional neural networks,” Sensors, vol. 18, no. 7, p. 2048, 2018. View at: Publisher Site | Google Scholar
  91. R. Jin, J. Jiang, Y. Qi, D. Lin, and T. Song, “Drone detection and pose estimation using relational graph networks,” Sensors, vol. 19, no. 6, p. 1479, 2019. View at: Publisher Site | Google Scholar
  92. D. Moura, L. Guardalben, M. Luis, and S. Sargento, “A drone-quality delay tolerant routing approach for aquatic drones’ scenarios,” in Proceedings of the 2017 IEEE Globecom Workshops (GC Wkshps), pp. 1–7, IEEE, Singapore, December 2017. View at: Google Scholar
  93. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, Las Vegas, NV, USA, June-July 2016. View at: Google Scholar
  94. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788, Las Vegas, NV, USA, June-July 2016. View at: Google Scholar
  95. M. A. Ab Aziz, M. F. Abas, A. A. N. Faudzi, N. M. Saad, and A. Irawan, “Development of wireless passive water quality catchment monitoring system,” Journal of Telecommunication, Electronic and Computer Engineering, vol. 10, no. 1–3, 2018. View at: Google Scholar
  96. D. Enriquez, S. Jenson, A. Bautista et al., “On software-based remote vehicle monitoring for detection and mapping of slippery road sections,” International Journal of Intelligent Transportation Systems Research, vol. 15, no. 3, pp. 141–154, 2017. View at: Publisher Site | Google Scholar
  97. A. T. J. Ong and Y. C. Wei, “Design and development of aircraft tracking system,” in Proceedings of the 2015 IEEE Student Conference on Research and Development (SCOReD), pp. 117–122, IEEE, Kuala Lumpur, Malaysia, December 2015. View at: Google Scholar
  98. H. Yaling, “The design of monitoring system based on GPRS,” in Proceedings of the 2016 International Conference on Robots & Intelligent System (ICRIS), pp. 432–435, IEEE, Zhangjiajie, China, August 2016. View at: Google Scholar
  99. A. Zolich, D. Palma, K. Kansanen et al., “Survey on communication and networks for autonomous marine systems,” Journal of Intelligent & Robotic Systems, vol. 95, no. 3-4, pp. 789–813, 2019. View at: Publisher Site | Google Scholar
  100. P. Suja and S. Tripathi, “Real-time emotion recognition from facial images using Raspberry Pi II,” in Proceedings of the 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN), pp. 666–670, IEEE, Noida, India, February 2016. View at: Google Scholar
  101. S. Goyal, A. Rani, and V. Singh, “An improved local binary pattern-based edge detection algorithm for noisy images,” Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 2043–2054, 2019. View at: Google Scholar
  102. W. Y. Lu and M. Yang, “Face detection based on Viola-Jones algorithm applying composite features,” in Proceedings of the 2019 International Conference on Robots & Intelligent System (ICRIS), pp. 82–85, IEEE, Haikou, China, June 2019. View at: Google Scholar
  103. C. Zhang, X. Xu, and D. Tu, “Face detection using improved faster RCNN,” 2018, https://arxiv.org/abs/1802.02142. View at: Google Scholar
  104. K. Sukvichai, K. Wongsuwan, N. Kaewnark, and P. Wisanuvej, “Implementation of visual odometry estimation for underwater robot on ROS by using RaspberryPi 2,” in Proceedings of the 2016 International Conference on Electronics, Information, and Communications (ICEIC), pp. 1–4, IEEE, Danang, Vietnam, January 2016. View at: Google Scholar
  105. W. Liu, Y. Liu, and R. Bucknall, “A robust localization method for unmanned surface vehicle (USV) navigation using fuzzy adaptive Kalman filtering,” IEEE Access, vol. 7, pp. 46071–46083, 2019. View at: Publisher Site | Google Scholar

Copyright © 2021 Vanita Jain et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
