Research Article | Open Access
Sara Pérez-Carabaza, Eva Besada-Portas, José Antonio Lopez-Orozco, Gonzalo Pajares, "Minimum Time Search in Real-World Scenarios Using Multiple UAVs with Onboard Orientable Cameras", Journal of Sensors, vol. 2019, Article ID 7673859, 22 pages, 2019. https://doi.org/10.1155/2019/7673859
Minimum Time Search in Real-World Scenarios Using Multiple UAVs with Onboard Orientable Cameras
This paper proposes a new evolutionary planner that determines the trajectories of several Unmanned Aerial Vehicles (UAVs) and the scan direction of their cameras in order to minimize the expected detection time of a nondeterministically moving target of uncertain initial location. To achieve this, the planner can reorient the UAVs' cameras and modify the UAVs' heading, speed, and height so that the UAVs reach, and the cameras observe, the areas with high probability of target presence as soon as possible. Besides, the planner uses a digital elevation model of the search region to capture its influence on the camera likelihood (changing the footprint dimensions and the probability of detection) and to help the operator construct the initial belief of target presence and the target motion model. The planner also lets the operator include intelligence information in the initial target belief and motion model, so that he/she can model real-world scenarios systematically. All these characteristics let the planner adapt the UAV trajectories and sensor poses to the requirements of minimum time search operations over real-world scenarios, as the results of the paper, obtained over three scenarios built with the modeling aid-tools of the planner, show.
The strong research interest in UAV trajectory planning takes advantage of the UAVs' capabilities to perform different types of military and civil missions, such as georeferencing, wildlife monitoring, or target tracking. Moreover, the current developments of their onboard sensorial systems make them also ideal for performing risky long-endurance reconnaissance and surveillance operations. This work focuses on a type of Probabilistic Target Search Problem (PTSP), named minimum time search (MTS), since reducing the time required by the UAVs and their onboard sensors to detect the target is a critical objective of the mission. The selected problem has several applications that include looking for survivors after natural disasters (e.g., after fires or earthquakes), search and rescue operations, or searching for military targets.
Approaches that tackle PTSPs determine the trajectories of the UAVs in spite of the uncertainty associated with the target location and sensor capabilities. To do this, they probabilistically model the different uncertainty sources. In more detail, the information about the target position is modeled with an initial probability map (with the belief of target presence within the search area) and a probabilistic motion model, while the sensors' uncertainty is modeled with detection likelihood functions. As an example, Figure 1 schematizes a PTSP where two UAVs equipped with electrooptic sensors (cameras) look for a static lost off-road vehicle in the mountains. The colored map at the bottom shows the altitude of the search area (with green being associated with valleys and brown with mountains), the grey shadowed map in the middle displays the target initial belief (darker areas, associated with lower altitudes, indicate a higher probability of target presence), and the blue/red polygons show the areas currently observed (within the camera footprints) by the onboard sensor of each UAV. When the PTSP is also an MTS operation, the regions with higher target probabilities (darker grey areas) should be observed (falling within the camera footprints) as soon as possible in order to minimize the target detection time.
One of the main goals of MTS planners is to reduce the target detection time, which can be achieved by optimizing the expected time of target detection [5–10]. Other PTSP approaches optimize alternative criteria, such as maximizing the probability of target detection [11–14] or minimizing its counterpart probability of nondetection [15, 16], maximizing the information gain, minimizing the system entropy, minimizing its uncertainty (areas with intermediate belief of target presence), or optimizing normalized or discounted versions of the previous criteria [4, 20–22]. A common characteristic of the different approaches is that, in scenarios with bounded resources (e.g., limited flying time or fuel), they often obtain better results than predefined search patterns (e.g., spiral, lawnmower), as they adapt the UAV trajectories to the scenario-specific target initial belief and motion [6, 20]. Besides, although the approaches that optimize the previous PTSP criteria can share the same elements and probabilistic models, the distinctive feature of MTS is the strong influence that the visiting order of the high-probability regions has on the expected time of target detection, as flying over high-probability areas first increases the chances of finding the target earlier.
The NP-hard complexity of PTSP is tackled with suboptimal algorithms and heuristics, such as gradient-based approaches [13, 15–17, 19], greedy methods [8, 12, 20], cross-entropy optimization [4, 7], Bayesian optimization algorithms, ant colony optimization [6, 9], or genetic algorithms. Besides, simplified formulations of the problem are typically adopted in order to further reduce its complexity. They range from considering static targets [8, 10, 13, 15, 16, 19, 20] instead of dynamic ones [4–7, 9, 11, 12, 14, 17, 21, 22], to modeling the sensors ideally [4–6, 8] instead of realistically (e.g., as radars [9–12] or downward-looking cameras [13, 14, 17, 19]), or to assuming that the UAVs fly following straight lines along the eight cardinal directions [4–8] or optimized waypoints [17, 21, 22] instead of considering the physical maneuverability constraints induced by the dynamic models of the UAVs [9–16, 19, 20]. Additionally, in some cases (e.g., in [10, 13, 16, 17, 19, 20]) the approach uses a receding horizon controller to divide the UAV trajectory into sequentially optimized sections, narrowing the optimization search space at the expense of constructing suboptimal myopic solutions. Finally, it is worth noting that the approaches are often tested over synthetic scenarios, built by the authors, without a clear relation to a real-world problem.
The previous simplifications and the lack of analysis over real-world scenarios reduce the applicability of the approaches to real-world problems. This is especially relevant within the subset of MTS approaches [4–10], as the majority of them have been developed for UAVs flying according to the eight cardinal directions [4–8] or only tested over ideal sensors and synthetic scenarios [4–6]. To pave the way toward using MTS methods in real-world scenarios, this work extends the capabilities of the planner introduced in  by contributing to the following fields:
(i) MTS mission definition, by combining intelligence and geographical information to construct the target probability models that will be used in real-world missions. This feature is inspired by software tools that use geographical information to build the target initial belief or motion model [24–26], monitor search missions [26, 27], or evaluate predefined search patterns [24, 28]. Its main benefit, as the results of this paper will show, is the substitution of the synthetic scenarios used in previous MTS planners by real-world inspired ones.
(ii) MTS planning, by simultaneously optimizing the UAV trajectory (by means of changing the UAV heading, speed, and height) and the camera pose (azimuth and elevation). This new competence combines the UAV trajectory optimization capabilities of previous MTS planners (which usually manipulate only the UAV heading) with the sensor moving capabilities of only a few PTSP approaches [21, 22]. As this innovation supports a higher moving capability of the camera footprint and a quicker coverage of the high-probability areas, it is especially relevant for MTS, where the target has to be detected as soon as possible.
(iii) Sensor characterization, by increasing the realism of the camera detection behavior, modeling the effects of the terrain elevation (occlusions and target-camera distance variation), camera orientation, and target and sensor size on the footprint dimensions and sensor likelihood. Not only is the realism of the likelihood model crucial to shorten the differences between the simulated and real behavior of the camera, but it is also taken into account during the mission definition presented in the Results section in order to set up the scenarios correctly.
In short, this paper presents a new planner that optimizes the UAV trajectories and onboard camera orientations in real-world dynamic MTS scenarios, where the behavior of a dynamic target is probabilistically modeled from a novel perspective by combining intelligence and geographic information, and whose new camera likelihood model takes into account the effects of the terrain elevation on the camera detection capabilities. Besides, we extend the capabilities of the MTS planner presented in , which only manipulated the trajectory heading of a set of UAVs equipped with fixed radars in order to optimize the detection time of static targets in synthetically built scenarios, with the simultaneous optimization of UAV trajectories and sensor poses over realistically built scenarios with dynamic targets. Moreover, the new planner also supports heterogeneous UAVs, which can have different parametrizations and start and leave the search mission at different times. Finally, this paper describes in detail, and in an algorithmic form, the functionality of the new planner with the purpose of clarifying the interaction between the different elements during the optimization process.
The remainder of this paper is organized as follows. The second section introduces the probabilistic formulation of the MTS problem, describes the novel approaches used to model the target initial state and motion behavior, and presents the new models used for the UAVs and their cameras. The third section details the new multistepped MTS planner presented in this paper, introducing its steps in an algorithmic form and analyzing its computational complexity. The fourth section compares the state of the art of the closest related work with our new planner and highlights its differences with the previous planner in . The fifth section analyzes its performance over three different real search scenarios and shows the benefits of letting the planner reorient the camera. Finally, the conclusions are drawn and some open research questions are discussed.
2. Minimum Time Search Definition
This section presents the probabilistic formulation of the MTS problem and describes the approaches used in this work to model the target, UAV, and sensor behavior.
2.1. General Problem Formulation
In the MTS problem presented in this paper, there is a set of UAVs overflying a search area (discretized into a grid of square cells) in order to detect the presence of a single target located within it. Besides, due to the uncertainty associated with the position and dynamics of the target and to the measurements of the onboard sensor of each UAV, the MTS problem is formulated within the probabilistic framework introduced in .
In more detail, the information about the target initial position (represented by random variable ) is modeled with the initial target belief , which is a probability mass function that represents the chances of finding the target at each . Besides, when the target is moving, the information about its dynamics is described with the target Markovian motion model , which states the probability that the target moves from any cell to any cell during a time lapse . Moreover, the detection capabilities of the onboard sensor of each UAV are described with the likelihood function , which returns the probability of detecting () or not detecting () a target placed at from the UAV and sensor pose stored in the state variable of the -th UAV.
For readers familiar with the Recursive Bayesian Filter (RBF, ), the previous probability models , , and are used to obtain the target belief , which represents the chances of finding the target at time step and at each cell , given the trajectory of the UAVs and sensors poses and the sensor measurements . The RBF process, stated in (1), calculates the current belief from the previous time step belief by (1) incorporating the current UAVs location and sensor poses and the last sensor measurements with the likelihood functions , and by (2) displacing and redistributing the target location probability with the target motion model . Besides, (1) assumes that the sensors provide measurements only at time steps that are multiple of and its normalization factor is used to ensure that .
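The recursive belief update described above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the function and array names are ours, and the target motion step is abstracted into a caller-supplied `transition` function (e.g., a convolution of the belief with a motion kernel).

```python
import numpy as np

def rbf_update(belief, transition, likelihood_nodetect):
    """One step of a recursive Bayesian filter over a grid belief.

    belief: (H, W) array with the probability of target presence per cell.
    transition: function mapping a belief array to the predicted belief
        after one target motion step.
    likelihood_nodetect: (H, W) array with the probability of the current
        measurement (here, non-detection) given the target is in each cell,
        for the current UAV locations and camera poses.
    """
    predicted = transition(belief)             # prediction (target motion) step
    updated = predicted * likelihood_nodetect  # correction (measurement) step
    return updated / updated.sum()             # normalization factor

# Usage: a static target (identity transition) and one non-detection scan
# that half-covers the lower-left cell lowers that cell's belief.
b = np.full((2, 2), 0.25)
L = np.array([[1.0, 1.0], [0.5, 1.0]])
b1 = rbf_update(b, lambda x: x, L)
```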
An MTS mission can be formulated as an optimization problem, where some criteria related to the previous probabilistic models and other mission objectives are evaluated in order to determine the best UAV trajectories and sensor poses during the duration of the mission . A useful criterion for MTS is the Expected Time of Detection (ETD, [5–7, 9, 10]), which measures the average time of detection of the target when the UAVs and the sensors follow the trajectories and poses defined in a given . It is calculable with (2)-(4), setting to for the initial case (). The first equation is similar to RBF equation (1), but as it lacks the normalization term and all measurements are nondetection, it obtains the "unnormalized belief" assuming that the onboard sensors do not detect the target from the UAVs trajectories and sensor poses in . The second one obtains , which is the probability of not detecting the target from and whose value decays as the sensors make nondetection observations in regions with probability of target presence. The third expression obtains the ETD from when becomes for some and underestimates its value when . Finally, it is worth noting that minimizing is a better option for MTS than maximizing the probability of detection along the whole trajectory , as the MTS objective is to collect as much probability mass as soon as possible [4–6].
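The ETD computation under the all-nondetections assumption can also be sketched. The names, the uniform observation period `dt`, and the order of the motion and measurement steps are our assumptions; the probability mass removed by each non-detection observation, weighted by its time step, accumulates into the expected detection time, and any residual mass makes the result an underestimate, as noted above.

```python
import numpy as np

def expected_detection_time(belief0, nodetect_maps, motion, dt):
    """ETD sketch: propagate the unnormalized belief assuming every
    observation is a non-detection, and weight the probability mass
    detected at each step by its detection time."""
    b = belief0.copy()
    p_prev = b.sum()                 # P("not detected yet"), starts at 1
    etd = 0.0
    for k, L in enumerate(nodetect_maps, start=1):
        b = motion(b) * L            # unnormalized belief after step k
        p_k = b.sum()                # P(target not detected up to step k)
        etd += k * dt * (p_prev - p_k)  # mass detected exactly at step k
        p_prev = p_k
    # If p_prev > 0, the target may stay undetected within the horizon,
    # so the returned ETD underestimates the true value.
    return etd, p_prev
```

With a static target known to be in the first of two cells and a sensor that fully observes that cell at the first step, the whole mass is collected at time `dt` and the ETD equals `dt`.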
Besides, in order to ensure that the UAV trajectories and sensor poses calculated by our MTS planner are feasible from the maneuverability point of view, we exploit the UAV and sensor deterministic dynamic model of each UAV-sensor pair , where stands for the set of control actions (e.g., commanded UAV heading, height & speed, and sensor elevation & azimuth) and for the time derivative of the UAV location and sensor poses. In particular, our MTS planner uses this dynamic model to obtain the solutions (best UAV trajectories and sensor poses ) from the initial UAVs locations and the sequence of sets of control actions proposed, manipulated, and evaluated by our approach in order to optimize MTS missions. Finally, it is worth highlighting that, due to the deterministic nature of the UAV and sensor dynamic model, there is an unambiguous relation between the best sequence of control actions and the best UAV trajectories and sensor poses.
The realism of the four models (i.e., of the initial target belief , of the target motion model , of the sensor likelihood , and of the UAV and sensor motion model ) is crucial to avoid discordances between the real ETD of a given and the ETD calculated by our approach. Hence, in the rest of this section we present the new models proposed in this paper to bring realism to the definition and solution of MTS missions performed by fixed-wing UAVs equipped with orientable cameras.
2.2. Initial Target Belief Definition
To construct the initial target belief , we merge knowledge coming from different sources, using a different probability layer for each information source and performing with (5) an addition of the probability layers weighted with their reliability/importance coefficients . In other words, considering that the first term in (5) is a normalization coefficient that ensures that is a probability function (i.e., ), our initial target belief is calculated as the mixture of the beliefs associated with the different -th information sources.
The layers can be associated with geographical information (e.g., terrain altitude, road maps) or intelligence/user-defined clues (e.g., last/habitual location areas of the target). For the examples of this work, we consider the following two layers:
(1) The terrain elevation probability layer , obtained with the following steps. First, the digital elevation model (DEM, ) of is automatically resampled to the size of the cells in in order to obtain the average height of each cell in . Next, the user/operator is required to divide the existing elevations within the cells in into consecutive ranges and to assign a chance of target presence to each range. Finally, the method automatically determines the cells in each elevation range and the probability associated with all , distributing the chances of target presence assigned by the operator. As this way of proceeding generates a geographical probability layer where areas with similar altitude share the same initial belief, it automatically spreads the belief uniformly over the different regions of the search area.
(2) The intelligence probability layer . For this layer the operator has to graphically define a mixture (weighted addition) of Gaussians (centered in eligible locations of the search area and spread according to selectable standard deviations) and of polygonal areas (defined by their external points placed in the desired locations of the search area ) with uniform probabilities (assigned by the operator). The weights of each element (each Gaussian and/or each polygonal area) within the intelligence probability layer are also selected by the operator according to the information gathered about the last known location of the target.
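The weighted mixture of layers in (5) reduces to a few lines of code. This is a minimal sketch with hypothetical inputs: each layer is a grid of nonnegative values and the weights play the role of the reliability/importance coefficients; the final division is the normalization that makes the result a probability function.

```python
import numpy as np

def initial_belief(layers, weights):
    """Mix probability layers into the initial target belief as a
    weighted sum normalized to sum to 1 (in the spirit of equation (5))."""
    mix = sum(w * layer for w, layer in zip(weights, layers))
    return mix / mix.sum()

# Usage: an elevation layer and an intelligence layer on a 2x2 grid,
# weighted 0.7 / 0.3.
elev = np.array([[1.0, 0.0], [0.0, 0.0]])
intel = np.array([[0.0, 0.0], [0.0, 1.0]])
b0 = initial_belief([elev, intel], [0.7, 0.3])
```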
As an illustrative example, we show the initial belief defined by an operator when looking for a drifting boat next to the coast in Areia Branca, Brazil. To obtain the elevation probability layer shown at the bottom of Figure 2(b), where darker/lighter greys are associated with higher/lower probability cells, the operator analyzes the elevation map represented in Figure 2(a) and assigns high chances of target presence to the elevations by the coast (represented in dark green in the elevation map), to model a strong belief that the boat may have reached the shore before the search mission starts; lower chances to the sea-level elevation (in blue), to model a moderate belief that the target is still at sea; and zero chances to the higher inland elevations. In order to define the intelligence probability layer , represented at the top of Figure 2(b), the operator centers a moderately spread Gaussian on the last known position of the boat. Finally, the operator assigns a weight to each layer ( and ) in order to produce the initial target belief shown in Figure 2(c).
(a) Elevation map (m)
(b) Probability layers ( at bottom & at top)
(c) Initial belief
2.3. Target Motion Model Definition
To define the target motion model, we can distinguish two cases:
(1) Scenarios with static targets. In this case, and due to the immobility of the target, if and only if and refer to the same cell in , and otherwise. Besides, in this case, the equations that contain the term can be simplified as .
(2) Scenarios with dynamic targets. To construct this type of target motion model, we combine different types of information (e.g., elevation data and sea currents). In this work, the operator has to provide the following:
(i) The elevation range where the target is allowed to move within . This allows the operator to indirectly assign the probabilities corresponding to the static target behavior to those cells that do not belong to the . In other words, , the target motion definition process automatically makes if and otherwise.
(ii) vectors , each with 9 values that represent, for a few selected cells , the chances of moving to their 8 neighbor cells and of staying at the same cell . Besides, in order to force the target to stay within the search area , some of the 9 values of the vectors that correspond to cells in the borders of are automatically set to zero. With this information, the target motion definition process makes for all and computes applying the potential field method (according to the cardinal directions that make the target move from one cell to its neighbors) for all and . In more detail, if with stands for the adjacent cell to in the -th cardinal direction, represents staying in the same cell, is the distance between cell and cell , and is the -th element of , then and . This way of proceeding allows the operator to define the distribution of the target probability around the neighborhood cells in only a few cells and to extend this definition to the remaining moving cells, taking into account the distance between the selected cells and the others.
(iii) The time period that has to be used to apply to the belief .
As the speed of the target is related to the quotient of the size of the cell and , this value is used by the planner to relate the simulation of the motion of the target to the simulation of the motion of the UAVs and sensors.
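The extension of the operator-defined motion vectors from a few seed cells to the rest of the moving cells can be sketched as a distance-weighted interpolation. This is an illustrative stand-in, not the paper's exact potential-field expressions (which are given by the elided equations): we use plain inverse-distance weights and renormalize so each interpolated 9-valued vector remains a probability distribution.

```python
import numpy as np

def interpolate_motion(seed_cells, seed_probs, cell):
    """Inverse-distance interpolation of the 9-valued motion vectors
    (8 neighbor directions + stay) defined at a few seed cells.

    seed_cells: list of (row, col) cells selected by the operator.
    seed_probs: list of 9-element arrays, one per seed cell, each summing to 1.
    cell: (row, col) cell where the motion vector is needed.
    """
    dists = np.array([np.hypot(cell[0] - c[0], cell[1] - c[1]) for c in seed_cells])
    if np.any(dists == 0):                 # exactly at a seed cell: use its vector
        return seed_probs[int(np.argmin(dists))]
    w = 1.0 / dists                        # closer seeds weigh more
    w /= w.sum()
    p = (w[:, None] * np.asarray(seed_probs)).sum(axis=0)
    return p / p.sum()                     # keep a valid probability distribution
```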
As an illustrative example, we also obtain the target motion model of the drifting boat example next to the coast in Areia Branca, Brazil. In this case, the operator defines for three cells in the sea, making for one of the cardinal directions towards the coast and for the remaining directions . Figure 3 represents with the starting point of the red arrows the location of the selected cells and with the arrows themselves the direction where for each . Besides, the operator also restricts the moving elevation range to only let the target (i.e., the drifting boat) move in the sea. To summarize the result of the target motion definition process, the green arrows in Figure 3 represent the direction obtained by weighting, for each cell , the arrows in the 8 cardinal directions with . Besides, the lack of inland green arrows indicates the static behavior of the target in this area. In short, the green arrows show how the defined target motion makes the belief over the sea move towards the coast and remain unchanged inland. Finally, it is worth highlighting that, thanks to our target motion model definition approach, this complex model is automatically built just considering the elevation range () and the target movement probabilities in three cells of .
2.4. Camera Likelihood with Terrain Occlusion
The likelihood of detecting the target with the onboard camera of each UAV in a given cell can be calculated scaling the Target Task Performance (TTP) metric with the percentage of the selected cell within the camera footprint. This scaling, used to reduce the detection probability in the cells of the footprint border, can be modeled with (6), where represents the area (size of the surface) of the common region of the cell and of the footprint of the camera (oriented and placed according to ), is the total area of cell , and is the target task performance function.
To determine the footprint of the camera from the UAV location and camera pose in , we follow the process schematized in Figure 4(a) and consisting of the following two steps. First, we calculate geometrically the camera footprint at sea level taking into account the following pieces of information within : UAV location () and camera azimuth, elevation, and FOV (). To do this, we consider that the sensor location is the same as the UAV location (since its deviation from the location of the UAV mass center is negligible for the mission) and that its orientations are measured with respect to the vehicle coordinate frame (since the camera gimbal compensates the UAV attitude). Second, we approximate the corners of the real camera footprint by the intersections (obtained using the 3D Bresenham algorithm ) of the terrain elevation with the 4 lines that join the camera with the 4 corners of the sea-level camera footprint.
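The second step, intersecting each camera-to-corner line with the terrain, can be sketched with a coarse line-sampling routine. This is an illustrative stand-in for the 3D Bresenham traversal cited in the text: it samples the ray from the camera position `p0` to the sea-level footprint corner `p1` and returns the first sample whose height falls at or below the terrain elevation; names and the sampling resolution are our assumptions.

```python
def ray_terrain_hit(p0, p1, elevation, n_steps=200):
    """Approximate the first intersection of the segment p0 -> p1 with the
    terrain given by elevation(x, y). p0, p1 are (x, y, z) tuples."""
    for i in range(n_steps + 1):
        t = i / n_steps
        x = p0[0] + t * (p1[0] - p0[0])
        y = p0[1] + t * (p1[1] - p0[1])
        z = p0[2] + t * (p1[2] - p0[2])
        if z <= elevation(x, y):       # ray has reached the terrain surface
            return (x, y, z)
    return p1                          # no earlier hit: corner stays at sea level
```

Over flat terrain at sea level the corner is returned unchanged, while raised terrain pulls the footprint corner toward the camera, exactly the occlusion effect the likelihood model needs to capture.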
(a) Footprint determination
(b) TTPF curve vs.
(c) TTPF curves vs. relative height (for )
The value of the target task performance function TTPF for each cell within the sensor footprint from the UAV location and sensor pose in is calculated with (7), where is the cycle size of the target and (according to [32, 33]) is the critical cycle size for detecting a small target with a likelihood of 50% (since when , ). Moreover, the target cycle size is calculated with (8), where is the real size of the target, is the angle between the along-scan (vertical) and cross-scan (horizontal) directions of the footprint, and (with ) is the ground sample distance (corresponding size of a pixel of the camera over terrain, obtainable as the ratio of the length of the footprint in each direction to its corresponding number of pixels). Taking into account the trigonometric relations between the UAV location and camera footprint, the angle can be obtained solving (9) and the ground sample distance in each scan direction can be calculated with (10) and (11), where is the height of the terrain at the target location and is the number of pixels of the camera in the vertical or horizontal directions ( ).
As an illustrative example of the likelihood model, Figure 4(b) shows how the probability of detection, proportional to TTPF, increases as the cycle size of the target grows. Hence, and due to (8)-(11), the probability of detection grows as the target size is increased, as the UAV flying altitude with respect to the terrain elevation gets smaller (through ), as the FOV of the camera is reduced (through and ), or as the camera elevation gets closer to the vertical one (through and ). In more detail, Figure 4(c) shows several curves, corresponding to different and values (indicated in the legend), versus the UAV flying altitude with respect to the terrain elevation (i.e., ) for a target of (e.g., the boat in Areia Branca).
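The shape of the curves in Figures 4(b) and 4(c) can be reproduced with the standard TTP form from the sensor-performance literature. Note this is an assumption: the paper's own expression (7) is not reproduced here, so we use the commonly cited logistic-like TTP curve, which satisfies the 50%-detection property mentioned in the text (probability 0.5 when the cycle size equals the critical cycle size).

```python
def ttpf(n, n50):
    """Standard target task performance curve: detection probability as a
    function of the target cycle size n relative to the 50%-detection
    cycle size n50 (a stand-in for the paper's expression (7))."""
    e = 2.7 + 0.7 * (n / n50)
    r = (n / n50) ** e
    return r / (1.0 + r)
```

As in Figure 4(b), the probability grows monotonically with the cycle size: it stays below 0.5 for targets smaller than the critical size and approaches 1 for much larger ones.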
Finally, note that the camera likelihood function defined by (6)-(11) is able to partially observe the borders of the footprints and to provide different values for targets of different sizes; for cells observed from different UAV heights, camera azimuths, and elevation angles; etc. Hence, our model does not nullify the likelihood of those cells that are partially within the footprint but whose center falls outside it, as in [17, 19], and allows the planner to move the camera away from the downward-looking pose assumed in other camera models tested in probabilistic target search problems [13, 14, 17].
2.5. UAV and Sensor Dynamic Models
The UAV dynamics corresponds to a fixed-wing UAV modeled with the upper part of the nonlinear parametrizable Simulink model represented in Figure 5 or the differential equations of the appendix. The UAV motion variables within , highlighted in light green on the right of the figure, are the UAV 3D location , 3D speeds , heading (), course angle (), air and ground velocity ( and ), and fuel consumption (). The UAV motion variables within the command , highlighted in cyan on the left, are the commanded UAV velocity (), heading (), and height (). Additional inputs to the UAV dynamical model are the wind speed () and direction (), highlighted on the left in pink. Besides, in order to let the reader identify the UAV dynamics within the Simulink model, the blocks associated with its height dynamics are colored in blue, with its velocity in grey, with its lateral displacement in white, with the wind in green, and with the fuel in magenta.
The sensor dynamic model is represented at the bottom of Figure 5 and corresponds to a gimbaled camera whose scan direction can be changed commanding its elevation () and azimuth () angles, with the model inputs highlighted in red on the left of the figure. Besides, the camera motion variables within are the outputs of the model highlighted in yellow on the right (camera elevation and azimuth ) and the blocks of the camera dynamics are colored in orange.
The whole model includes the usual limitations related to the UAV speed, height, and heading and to the camera elevation and azimuth. Its different parameters (provided in an external input file, selectable by the operator) allow adapting the movement of the UAV and of the camera to the behavior of different real-world aircraft and camera gimbals. Finally, the model integration, performed from each UAV and camera initial state using the values of a sequence of commands , allows obtaining the with a given simulation resolution time step .
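The kind of command-driven integration described above can be illustrated with a toy fixed-wing step: first-order lags on speed and height, a rate-limited heading, and a kinematic position update. All parameters (`tau_v`, `tau_h`, `max_turn`) and the model structure are illustrative assumptions, far simpler than the Simulink model of Figure 5 or the differential equations of the appendix.

```python
import math

def step_uav(state, cmd, dt, tau_v=2.0, tau_h=3.0, max_turn=0.3):
    """One integration step of a toy fixed-wing model.

    state: (x, y, h, v, psi) position, height, speed, heading.
    cmd: (v_c, psi_c, h_c) commanded speed, heading, height.
    """
    x, y, h, v, psi = state
    v_c, psi_c, h_c = cmd
    v += (v_c - v) * dt / tau_v                          # first-order speed lag
    h += (h_c - h) * dt / tau_h                          # first-order height lag
    err = (psi_c - psi + math.pi) % (2 * math.pi) - math.pi
    psi += max(-max_turn * dt, min(max_turn * dt, err))  # rate-limited turn
    x += v * math.cos(psi) * dt                          # kinematic position update
    y += v * math.sin(psi) * dt
    return (x, y, h, v, psi)
```

Integrating such a step repeatedly from the initial state, holding each command constant over its time lapse, yields the trajectory phenotype that the planner evaluates.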
3. MTS Planner
The MTS planner presented in this work is an extension of the Genetic Algorithm (GA) planner introduced in . This section details the main elements of the new planner, introduces its algorithmic description following a top-down strategy, and discusses its computational cost.
3.1. Multistepped GA-Based Planner
The MTS planner obtains the UAV trajectories for a given mission time following a receding horizon controller approach [10, 13, 16, 17, 19, 20]. More concretely, the planner divides into different sections of duration and optimizes each section ( with ) sequentially, using a GA whose inputs are the final state of the last section and its "unnormalized belief" , and whose outputs are the new and . Besides, it incorporates the possibility of letting each UAV engage and leave the MTS mission at different time instants and (without loss of generality, we assume that for at least one UAV, ) and of using different time steps for the measurements of each UAV and for the target update . Hence, to obtain sequentially the sections of the UAV trajectories (and of the camera orientations), the new planner obtains the overall mission ending time, calculates the number of sections required for the mission, and considers the UAVs and cameras immobile and disabled for those time steps where they are not engaged in the MTS mission (i.e., and ). Besides, it also obtains the minimum time step required to update the "unnormalized belief" during the evaluation of the ETD, taking into account the time steps of the target motion model and of the camera of each UAV .
The algorithmic description of the new multistepped planner is sketched in the pseudocode of Algorithm 1. It starts computing the mission duration , the number of sections into which will be divided, and the time lapse required to update the “unnormalized belief” . Next, it initializes the “unnormalized belief” and the fitness criteria vector , which stores the different values of the criteria used by the GA to decide if a solution (proposal of UAV trajectories and sensor poses in ) is better than another. Finally, it sequentially optimizes each of the sections of the trajectory (i.e., each with and ) using the optimizer in , which will be described in detail in the following section.
|Require: ⊳Structure with scenario models and times: , , , , ,|
|Require: ⊳Structure with optimizer parameters|
|Require: ⊳Initial UAVs locations and sensor poses|
|Require: ⊳ Duration of each subsequence|
|1: ⊳ Determine the duration of the mission|
|2: ⊳ Determine the number of sections required for the multi-stepped planner|
|3: ⊳ Round the duration of the mission|
|4: ⊳ Determine minimum time step to update .|
|5: ⊳ Set starting time|
|6: ⊳ Initialize|
|7: ⊳ Initialize fitness criteria vector|
|8: for r=1:R do⊳ For each subsequence|
|9: ⊳ Obtain best subsequence|
|10: ⊳ Update time for the following section|
|11: end for|
|12: return ⊳ Complete UAV trajectory and sensor poses|
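The outer loop of Algorithm 1 can be condensed into a short runnable skeleton. This is a sketch with our own names: `optimize_section` stands in for the GA call of step 9, and the state/belief threading mirrors how the final states and "unnormalized belief" of one section feed the optimization of the next.

```python
def multistep_plan(n_sections, optimize_section, belief0):
    """Receding-horizon skeleton: optimize each trajectory section in turn,
    passing the final UAV/sensor states and the unnormalized belief of one
    section into the next."""
    belief = belief0
    state = None                      # initial UAV locations and sensor poses
    plan = []
    for r in range(n_sections):
        section, state, belief = optimize_section(state, belief, r)
        plan.append(section)          # accumulate the complete trajectory
    return plan, belief
```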
3.2. GA Optimizer
This section details the most relevant aspects of the GA used to determine the best UAV trajectory and sensor pose for each section ( with ) of the whole solution . We start presenting the GA genotype, continue with the main GA steps and operators, and finish with the evaluation of each possible solution .
3.2.1. GA Decision Variables
The decision variables directly manipulated by the genetic operators (i.e., the GA genotype) of the MTS planner are the subsequence of control actions used to obtain, by means of the UAV motion model, the r-th subsection of the UAV trajectories (i.e., the GA phenotype or solution). The actions are the commanded UAV heading, speed, and height and the camera elevation and azimuth. Additionally, the number of decision variables is reduced by applying each action constantly during a fixed time lapse and by adapting the number of actions required for each UAV within the r-th subsection to the time steps where the UAV is engaged in the MTS mission. In other words, the GA only obtains the values of the sets of actions (UAV heading, speed, and height; camera elevation and azimuth) for each UAV engaged in the mission during the r-th trajectory subsection.
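A possible encoding of this genotype is sketched below; the variable names, bounds, and nested-list layout are ours for illustration, not the paper's:

```python
import random

# Each engaged UAV contributes a fixed number of action tuples, each with
# 5 decision variables (heading, speed, height, camera elevation, camera
# azimuth); each action is held constant during its time lapse.
VARS   = ("heading", "speed", "height", "cam_elev", "cam_azim")
BOUNDS = ((0, 360), (15, 30), (100, 500), (-90, 0), (0, 360))  # assumed units

def random_genotype(n_engaged_uavs, actions_per_uav, rng=random.Random(0)):
    """Uniform initialization of one GA individual (cf. step 4 of
    Algorithm 2): a nested list indexed as [uav][action][variable]."""
    return [[[rng.uniform(lo, hi) for lo, hi in BOUNDS]
             for _ in range(actions_per_uav)]
            for _ in range(n_engaged_uavs)]

g = random_genotype(n_engaged_uavs=2, actions_per_uav=4)
print(len(g), len(g[0]), len(g[0][0]))  # -> 2 4 5
```

Only the UAVs engaged during the current subsection contribute rows, which is how the planner keeps the genotype small.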
3.2.2. GA General Description
The GA that sequentially optimizes each of the trajectory sections based on the set of command actions and the fitness criteria (which will be explained in the following section) maintains the operators, parameters, and structure of a standard GA and includes a new step that precalculates the number of actions required by each UAV.
To present it in algorithmic form, we extend the notation of the paper with left superindexes and left subindexes. The first superindex indicates that there is a population or group of J action sequences and fitness criteria vectors. The second one identifies the best action sequence, UAV trajectories and sensor poses, and fitness criteria vector within the population of solutions. The left subindexes are incorporated into the previous notation when the algorithm needs to distinguish between the generic population elements and those associated with the parents selected by the GA and with their children.
The GA in Algorithm 2 performs the following steps. First, step 3 computes the number of actions required by each UAV in the current subsection of the trajectory, taking into account the current time step and each UAV's starting and ending engaging times. Step 4 initializes a random population of the J action sequences required for the UAVs engaged in the mission, using a parametrizable uniform distribution. Steps 5-7 simulate the UAV trajectories and sensor poses from the sequences of actions and evaluate them according to the criteria presented in the following section. In step 9, the GA enters an optimization loop which ends when a fixed computation time has passed. Next, step 10 selects pairs of parent action sequences from the current population using binary tournament selection; step 11 creates the children by combining pairs of parents with a single-point crossover; step 12 slightly modifies the children with an incremental Gaussian noise mutation; steps 13-15 simulate and evaluate them; and step 16 selects the surviving population from the previous population and the mutated children using NSGA-II recombination. Once the computation time has elapsed, step 19 identifies the best solution in the population; step 20 obtains the trajectories, sensor poses, and “unnormalized belief” associated with it; and step 21 returns them.
|1: procedure OPTIMGA ⊳ Inputs: optimizer parameters, final UAV states and sensor poses of the previous section, “unnormalized belief”, and current time|
|2: ⊳ Get population size J|
|3: ⊳ Obtain number of actions for each UAV|
|4: ⊳ Generate J sequences with the required actions|
|5: for j = 1 : J do|
|6: ⊳ Simulate and evaluate|
|7: end for|
|8: ⊳ Initialize counter of the algorithm iterations (generations)|
|9: while the computation time has not elapsed do ⊳ Main optimization loop|
|10: ⊳ Select pairs of parents (tournament selection)|
|11: ⊳ Perform crossover of parents (single-point crossover)|
|12: ⊳ Mutate children (incremental Gaussian mutation)|
|13: for j = 1 : J do|
|14: ⊳ Simulate and evaluate|
|15: end for|
|16: ⊳ Select surviving population (NSGA-II recombination)|
|17: ⊳ Increment the GA generation number|
|18: end while|
|19: ⊳ Select best solution|
|20: ⊳ Simulate and evaluate it|
|21: return ⊳ Return best solution|
|22: end procedure|
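Steps 10 and 11 of Algorithm 2 can be sketched as follows. This is a minimal illustration: the paper's multi-criteria fitness comparison is reduced here to a single scalar (lower is better), and all names are ours:

```python
import random

def tournament(pop, fitness, rng=random.Random(1)):
    """Binary tournament selection (step 10): draw two candidates at
    random and keep the fitter one. `fitness[i]` scores individual i;
    a scalar stand-in for the paper's multi-criteria comparison."""
    a, b = rng.randrange(len(pop)), rng.randrange(len(pop))
    return pop[a] if fitness[a] <= fitness[b] else pop[b]

def single_point_crossover(p1, p2, rng=random.Random(2)):
    """Single-point crossover (step 11) on flat decision-variable lists:
    swap the tails of the two parents after a random cut point."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
parent = tournament(pop, fitness=[3.0, 1.0, 2.0])
c1, c2 = single_point_crossover([0, 0, 0, 0], [1, 1, 1, 1])
assert sorted(c1 + c2) == [0, 0, 0, 0, 1, 1, 1, 1]  # genes are conserved
```

In the actual planner the crossed individuals are whole action sequences per UAV, and NSGA-II recombination (step 16) decides which of parents and children survive.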
It is worth noting that the behavior of several steps can be configured using the information in the configuration variable. The initialization parameters in step 4 allow, for each UAV and type of action, (1) selecting different bounds for the uniform distribution and (2) enabling/disabling its optimization. These options let our MTS planner generate trajectories with some fixed behavior (e.g., fixed height or fixed camera orientations). The stop condition parameter in step 9 fixes the computation time of the GA. The crossover parameter in step 11 selects the percentage of parents that undergo the single-point crossover or are directly copied as children. Finally, the mutation parameters in step 12 select the Gaussian noise that is added to all the variables of a child and the Gaussian noise that is added to a few of its variables, each uniformly selected with probability 1/n, where n is the number of decision variables.
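The two-level mutation just described can be sketched as follows (a minimal illustration in Python; parameter names and values are ours):

```python
import random

def mutate(child, sigma_all, sigma_few, rng=random.Random(3)):
    """Incremental Gaussian mutation (step 12): small noise sigma_all on
    every decision variable, plus larger noise sigma_few added to each
    variable independently with probability 1/n, where n is the number
    of decision variables in the child."""
    n = len(child)
    out = []
    for x in child:
        x += rng.gauss(0.0, sigma_all)   # mild perturbation on all variables
        if rng.random() < 1.0 / n:       # occasional large jump on a few
            x += rng.gauss(0.0, sigma_few)
        out.append(x)
    return out

mutated = mutate([10.0, 20.0, 30.0], sigma_all=0.1, sigma_few=5.0)
print(len(mutated))  # -> 3
```

The small noise provides local refinement while the rare large jumps help the GA escape local optima of the multimodal MTS criteria.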
Besides, there are three steps of the GA that use the fitness criteria variable to select the solutions of the population that become pairs of parents (step 10), survive in the next generation (step 16), and become the best solution (step 19). Their behavior will be further detailed in the following section, after explaining the optimization criteria used in our MTS planner.
3.2.3. Simulation and Evaluation Process
In order to evaluate one of the solutions (a subsequence of action commands manipulated and proposed by the GA), first, the motion model in Figure 5 or in the appendix has to be simulated for each UAV, and, second, the fitness criteria have to be evaluated. The planner also has to take into account the UAVs' mission engaging times and the possible differences among the time lapses of the problem (action, measurement, target motion, and belief update). In the remainder of this section we describe the simulation process, the fitness criteria evaluation, and the steps of the simulation and evaluation function, presented in Algorithm 3 and used by the GA in Algorithm 2.
|1: procedure SIM&EVAL|
|4: ⊳ Simulation of the proposed sequence of actions to obtain the UAV trajectories and fuel consumption|
|5: for u = 1 : U do|
|6: ⊳ Simulate each UAV and sensor movement|
|8: end for|
|9: ⊳ Evaluation of the trajectory obtained from the proposed sequence of actions|
|10: ⊳ Update collision criterion|
|11: ⊳ Update fuel consumption|
|12: ⊳ Update smoothness criterion|
|13: ⊳ Update fuel consumption|
|14: ⊳ Simulation of the target and evaluation of the probabilistic criteria|
|16: while do|