Abstract
Due to their capability of fast deployment and controllable mobility, unmanned aerial vehicles (UAVs) play an important role in mobile crowdsensing (MCS). However, constrained by limited battery capacity, UAVs cannot serve a wide area. In response to this problem, ground vehicles are introduced to transport, release, and recycle UAVs. However, existing works only consider a special scenario: one ground vehicle with multiple UAVs. In this paper, we consider a more general scenario: multiple ground vehicles with multiple UAVs. We formalize the multi-vehicle-assisted multi-UAV path planning problem, which is a joint route planning and task assignment problem (RPTSP). To solve RPTSP, an efficient multi-vehicle-assisted multi-UAV path planning algorithm (MVP) is proposed. In MVP, we first allocate the detecting points to proper parking spots and then propose an efficient heuristic allocation algorithm, EHA, to plan the paths of the ground vehicles. Besides, a genetic algorithm and reinforcement learning are utilized to plan the paths of the UAVs. MVP maximizes the profits of an MCS carrier under a response time constraint and minimizes the number of employed vehicles. Finally, performance evaluation demonstrates that MVP outperforms the baseline algorithm.
1. Introduction
In recent years, owing to the massive increase in sensor-rich mobile devices, mobile crowdsensing (MCS) [1] has emerged as a new sensing paradigm that relies on a crowd of personal mobile phones, tablet computers, and other smart gadgets to perform large-scale sensing tasks. While traditional sensing technologies incur large overheads due to the deployment of many dedicated sensors, MCS only needs to pay incentive rewards to attract individuals to perform sensing tasks, which is considerably more cost-effective. Therefore, MCS has recently been used in many valuable applications, such as detecting air quality and collecting traffic information [2, 3].
In addition, tremendous progress in microelectromechanical systems research has enabled UAVs to enter the civilian market. Since UAVs are economical, flexible, and easy to operate, they have been widely used in agriculture, geological exploration, the military, and other fields [4–6]. Due to their high mobility and fast deployment, UAVs equipped with rich sensors can collect various data anywhere and anytime. They can also collect data in regions that ground vehicles have difficulty reaching, e.g., flood hazard areas. With the increasing popularity of UAVs, more and more researchers have begun to introduce UAVs into MCS to obtain better performance.
Despite the aforementioned benefits, the hovering time of UAVs is severely constrained by limited battery capacity, which prevents them from serving a wide area. To solve this problem, in practice, ground vehicles are utilized to transport UAVs to collect data; in addition, UAVs fly back to a ground vehicle to recharge after completing their sensing tasks. This so-called vehicle-assisted UAV sensing benefits from both the long driving range of the vehicle and the high flexibility of the UAVs [7].
After introducing ground vehicles, efficient path planning and scheduling of the drones and the ground vehicles become a key issue. Many studies are dedicated to optimizing the routing and scheduling of vehicle-assisted UAVs for parcel delivery [8–10], wherein vehicles can also visit customers to deliver parcels. These works are inappropriate for the vehicle-drone cooperative sensing problem studied in this paper, wherein a vehicle is used only to transport UAVs. Meanwhile, only a few studies have addressed the routing of vehicle-drone cooperative sensing systems [11, 12]: [11] assumes that the vehicle carries only one drone, while [12] considers one vehicle with multiple drones and proposes algorithms to collect sensing data at multiple target points simultaneously.
However, all existing studies assume that there is only one ground vehicle. In the real world, we need to employ multiple vehicles, each with multiple drones, to perform sensing tasks simultaneously. Compared with relying on only one ground vehicle, using multiple vehicles can effectively improve the efficiency of the MCS system.
When multiple vehicles are involved, the path planning of vehicles and drones becomes a more complex problem that cannot be converted into a classic optimization problem. In this paper, we formalize the joint route planning and task assignment problem (RPTSP). To simplify RPTSP, we divide it into two subproblems: the task assignment problem and the path planning problem. In our scenario, the second subproblem includes multivehicle path planning and multi-UAV path planning. We propose an efficient heuristic allocation algorithm, EHA, to determine the paths of multiple vehicles, which solves both the task assignment problem and the multivehicle path planning problem. EHA uses an iterative process to assign tasks to each ground vehicle based on the expenses incurred and the time consumed. Besides, to solve the multi-UAV path planning problem, we transform it into a Multiple Traveling Salesmen Problem (MTSP) and then utilize a genetic algorithm and reinforcement learning to solve the MTSP. In general, our goal is to maximize the profits of the MCS carrier under a time budget by globally optimizing the assignment of tasks and the routes of vehicles and UAVs, while also minimizing the number of employed ground vehicles.
To the best of our knowledge, we are the first to consider the task assignment and routing problem for multi-vehicle-assisted multi-UAVs in MCS. The contributions of this work are summarized as follows: (i) We introduce multiple vehicles into the vehicle-drone cooperative sensing system and formalize the joint route planning and task assignment problem (RPTSP). (ii) To solve RPTSP, we propose a multi-vehicle-assisted multi-UAV path planning algorithm (MVP), which maximizes the profits of the MCS carrier under a response time constraint. (iii) Extensive experiments are conducted, and the results show that MVP outperforms the baseline algorithm.
The remainder of this paper is organized as follows. Section 2 gives an overview of the existing work related to the problem that we are addressing. Section 3 describes the system model and problem formulation. Section 4 illustrates the details of MVP. Section 5 introduces the simulation experiment. Section 6 concludes this paper.
2. Related Work
Recently, much research has been conducted on MCS. These works take different approaches to performing sensing tasks in different application scenarios: (1) the MCS system relies on individuals' smart devices to perform sensing tasks, (2) it utilizes one vehicle and one drone, or (3) it utilizes one vehicle and multiple drones. In the following, we describe these works.
2.1. MCS Utilizing the Smart Devices
For traditional MCS that relies on the smart gadgets possessed by individuals, there have been many studies on assigning sensing tasks to participants.
These studies have different optimization objectives; e.g., He et al. [13] devised an efficient local ratio-based algorithm to maximize the benefits of the MCS carrier under a time budget constraint; Xiong et al. [14] introduced an incentive allocation framework to minimize total incentive payment while ensuring predefined spatial-temporal coverage; and Shi et al. [15] designed a crowdsensing task assignment mechanism to maximize the task completion rate under a predefined incentive budget.
2.2. MCS Utilizing One Vehicle and One Drone
When evolving toward MCS architectures consisting of UAVs and vehicles, route planning must be carefully designed to minimize the time consumed or the rewards paid in performing sensing tasks.
Chen et al. [16] designed a trajectory segment selection scheme to remove data redundancy and improve the coverage quality. Luo et al. [11] proposed two heuristic algorithms to solve the two-echelon cooperated routing problem for the ground vehicle and its carried drone. Savuran and Karakaya [17] proposed a path optimization method that allows the vehicle to keep moving when the drone is performing tasks. However, the above works all consider the case of only one drone, which makes it impossible for vehicle-assisted UAVs to perform multiple tasks in parallel.
2.3. MCS Utilizing One Vehicle and Multiple Drones
As for the case of using multiple UAVs, Hu et al. [18] proposed a vehicle-assisted multi-UAV routing and scheduling algorithm (VURA). It works by iteratively deriving solutions based on UAV routes picked from the memory that contains flight paths of drones. Through continuous joint optimization of parking spot selection, path planning, and tour assignment, VURA can produce a final appropriate solution. In [12], Hu et al. proposed a novel algorithm (VAMU) based on VURA, which schedules the multiple drones to be launched and recycled in different places. It avoids the time wastage when the vehicle waits for drones to return and thus reduces the time required to complete all tasks.
However, the works mentioned above did not take multiple vehicles into account. In reality, the MCS carrier needs to employ multiple vehicles, each with multiple drones, to perform sensing tasks simultaneously, which can significantly improve the efficiency of the MCS system. This motivates us to consider a more general scenario, in which multiple vehicles and multiple drones are used to perform the sensing tasks allocated by the MCS system.
3. System Model and Problem Formulation
3.1. System Model
In our model, there are several ground vehicles carrying different numbers of UAVs. A set of detecting points that UAVs need to visit to collect data is distributed over a target region. Every detecting point needs to be visited exactly once by one UAV, and a UAV can visit multiple detecting points (i.e., perform sensing tasks) consecutively in a single flight. Due to their limited battery capacity, UAVs cannot serve a wide area. To solve this problem while maximizing efficiency, this paper employs multiple ground vehicles to assist the UAVs.
As shown in Figure 1, with the aid of ground vehicles, the UAVs can visit detecting points distributed over a very large region. From its starting point, a ground vehicle transports its drones to the preselected parking spots sequentially. Once a ground vehicle arrives at a parking spot, the UAVs it carries are released to perform sensing tasks at nearby detecting points. When the UAVs finish the missions of a single trip, they return to the corresponding ground vehicle to be recharged. Notice that a UAV may be launched and recycled multiple times at each parking spot. When all detecting points near the parking spot have been visited by UAVs, the ground vehicle leaves with its UAVs for the next selected parking spot. Once all detecting points have been visited, the whole sensing task in the target region is finished.

The consumed time and the generated incentive rewards must be considered when the MCS system assigns sensing tasks and plans the route of vehicles as well as UAVs.
For simplicity, we make the following hypotheses: (i) the MCS system knows the coordinates of each detecting point; (ii) each detecting point is accessed exactly once; (iii) the time consumed by a UAV to perform the sensing task at a detecting point is constant; (iv) both the ground vehicles and the UAVs travel at constant speeds [12, 18].
3.2. Problem Formulation
Let G = (V, E) be an undirected graph, where V is the set of vertices and E is the set of edges. V is divided into D and P, which represent the set of detecting points and the set of candidate parking spots, respectively. n = |D| denotes the number of detecting points, and m = |P| denotes the number of parking spots. The distance between points i and j is denoted by d(i, j), where i and j may be detecting points or parking spots.
The employed ground vehicles are expressed as C = {c_1, c_2, …, c_K}, where K represents the total number of ground vehicles. U_k denotes the set of UAVs that vehicle c_k possesses, where n_k = |U_k| represents the number of drones owned by vehicle c_k. D_k ⊆ D denotes the detecting points that the UAVs possessed by c_k need to access. Besides, we use v_g and v_u to represent the speeds of the vehicles and the UAVs, respectively, and L_max represents the maximum flight distance of a UAV in a single trip. In this paper, the main objective is to jointly optimize the sensing task assignment and the path planning of ground vehicles and UAVs such that the incentive rewards produced are minimized under a time budget constraint, denoted as T_B.
We need to address the following issues. First, we must decide how to select proper parking spots from the roads lying in the target region. There are infinitely many points on the road network that could serve as candidate parking spots. What we need to do is to select some of these points as parking spots and assign detecting points to them. The detecting points assigned to parking spot p are represented as D_p (D_p ⊆ D). It must be guaranteed that, when the vehicles park at the selected spots, the drones they carry can access all the detecting points.
Then, we need to plan the flight paths of the UAVs when a ground vehicle arrives at a parking spot p. In our scenario, each drone performs a trip by visiting the detecting points along its route sequentially. Since we employ UAVs to perform tasks in parallel, the flight paths of the UAVs possessed by ground vehicle c_k at parking spot p can be denoted as R_k^p = {r_u^p : u ∈ U_k}. In detail, the flight path of UAV u at parking spot p is expressed as r_u^p = (p, d_1, …, d_l, p); i.e., the UAV launches from p, visits its assigned detecting points in order, and returns to p.
We use x_i^r ∈ {0, 1} to denote whether route r contains detecting point i, where r ∈ R_k^p and i ∈ D_p. In detail, x_i^r = 1 means that detecting point i is included in route r. Furthermore, y_{i,j}^r ∈ {0, 1} is used to express whether the points i and j are adjacent in route r; for example, y_{i,j}^r = 1 indicates that i and j are adjacent in r. Then, the total flight distance len(r_u^p) of UAV u at parking spot p is obtained by summing d(i, j) over all adjacent pairs of points in its route:
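With y_{i,j}^r the adjacency indicator of route r = r_u^p, D_p the detecting points assigned to spot p, and d(i, j) the distance function, a form of this definition consistent with the surrounding text (the original display equation is not reproduced here, so this is a reconstruction under the notation assumed above) is:

```latex
\operatorname{len}\left(r_{u}^{p}\right)
  = \sum_{i \in D_p \cup \{p\}} \; \sum_{j \in D_p \cup \{p\}}
      y_{i,j}^{\,r_u^p} \, d(i, j)
```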
Based on the above equation, the total distance L_k^p traveled by all UAVs of ground vehicle c_k at parking spot p is given as follows:
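Summing the per-UAV flight distances over the UAV set U_k of vehicle c_k (a reconstruction under the notation assumed here, since the original display equation is not reproduced):

```latex
L_k^p = \sum_{u \in U_k} \operatorname{len}\left(r_u^p\right)
```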
Besides, a sorted set of parking spots needs to be determined for each vehicle to minimize the generated incentive rewards under the time budget constraint. We use W_k = (s_k, p_1, p_2, …), where p_i ∈ P and s_k is the starting point of vehicle c_k, to represent the route of ground vehicle c_k. We use z_{i,j}^k ∈ {0, 1} to denote whether the points i and j are adjacent in this vehicle route (z_{i,j}^k = 1) or not (z_{i,j}^k = 0). Then, the travel distance L_k of ground vehicle c_k is expressed as follows:
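With z_{i,j}^k the adjacency indicator of the route of vehicle c_k (starting point s_k) and d(i, j) the distance function, a consistent form of this definition (a reconstruction, since the original display equation is not reproduced) is:

```latex
L_k = \sum_{i \in P \cup \{s_k\}} \; \sum_{j \in P \cup \{s_k\}}
        z_{i,j}^{\,k} \, d(i, j)
```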
Since the distances traveled by the UAVs and by the vehicle that carries them have now been calculated, we can derive the time T_k consumed by vehicle c_k and its UAVs over the whole sensing task:
Thus, the total time cost of all vehicles and UAVs is defined as follows:
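One plausible form of these two definitions (a reconstruction, not the paper's verbatim equations): the time spent at each parking spot is governed by the slowest UAV, each visited point costs a constant sensing time t_s (hypothesis (iii)), |r_u^p| denotes the number of detecting points on the route, and vehicles operate in parallel, so the overall time is the maximum over vehicles:

```latex
T_k = \frac{L_k}{v_g}
      + \sum_{p \in W_k} \max_{u \in U_k}
          \left( \frac{\operatorname{len}(r_u^p)}{v_u} + \left| r_u^p \right| t_s \right),
\qquad
T = \max_{1 \le k \le K} T_k
```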
Finally, the incentive rewards generated throughout the execution of all sensing tasks must also be determined. In this paper, the revenue of a ground vehicle depends on the total flight distance of its UAVs and the travel distance of the vehicle itself. Therefore, the final incentive rewards R_k earned by ground vehicle c_k are calculated as follows, where R_0 represents the base incentive reward that a ground vehicle receives once it has accepted and completed its assigned sensing tasks, and α and β are the price coefficients for the UAV and the vehicle, respectively, which are used to transform distance into price:
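With R_0 the base reward, α and β the price coefficients, L_k^p the total UAV flight distance of vehicle c_k at spot p, and L_k its travel distance, the reward can be written as (a reconstruction under the notation assumed here):

```latex
R_k = R_0 + \alpha \sum_{p \in W_k} L_k^p + \beta\, L_k
```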
In our scenario, for the MCS carrier, the income is proportional to the number of detecting points. Moreover, for an actual sensing task, the income is a fixed value, which means that maximizing profits is equivalent to minimizing expenses. Our objective can therefore be transformed into minimizing the generated incentive rewards. The total incentive reward cost, expressed as the sum of the revenues R_k of all ground vehicles in the solution, is what the algorithm proposed in this paper aims to minimize.
Constraint (8) ensures that at least one ground vehicle is employed. Constraint (9) requires that every ground vehicle carries at least one drone. Constraint (10) states that the flight distance of a UAV in a single trip must not exceed its maximum flight distance. Constraint (11) indicates that the time consumed by a solution must not exceed the time budget. Constraint (12) ensures that all detecting points of each parking spot are visited. The notation and terminology used throughout the paper are shown in Table 1.
4. Algorithm Design
In this section, we first introduce some challenges encountered in designing the efficient solution. Then, we propose MVP to solve these problems. After that, we provide a detailed description of each part of MVP.
4.1. The Challenges
In order to complete the whole sensing mission effectively, we need to solve the following problems:
(1) How to allocate each detecting point to a proper parking spot? In our scenario, the detecting points are randomly distributed in a large target region, and the parking spots are sampled from the points of the road network. To reduce the distance between a detecting point and its parking spot, a naive method is to allocate each detecting point to the closest candidate parking spot, where candidates are sampled at a fixed interval along the roads. However, the shortest distance between parking spots and detecting points does not imply the smallest time cost for the UAVs to visit those detecting points, as the following examples illustrate.
First, as shown in Figure 2(a), some parking spots possess only a few detecting points. When ground vehicles traverse these spots, having too few detecting points causes some drones to sit idle, which incurs unnecessary time wastage. Besides, as shown in Figure 2(b), a detecting point may be assigned to a parking spot only because that spot is the closest one, even though the other detecting points assigned to that spot are far away from it; in that case, a UAV incurs a relatively large time cost when traversing the detecting points.
(2) How to allocate the parking spots to ground vehicles? Unlike previous studies, we employ multiple vehicles at the same time. We transform the problem of assigning sensing tasks into the parking spot allocation problem, which can be defined as follows: given a set of parking spots, a set of detecting points, and the correspondence between them, determine the route of each employed vehicle so as to minimize the produced incentive rewards under a time budget constraint. Notice that each parking spot is visited exactly once by exactly one ground vehicle, and the number of employed ground vehicles is not determined until all parking spots are allocated.
(3) How to plan the paths of UAVs when a ground vehicle arrives at a parking spot? After the allocation of the detecting points, we know exactly which detecting points correspond to each parking spot. When a ground vehicle arrives at a parking spot, we should plan the paths of its UAVs. Since every vehicle possesses multiple drones, the problem can be transformed into the multiple travelling salesman problem (MTSP). The MTSP in our scenario is stated as follows: given a parking spot and the set of detecting points allocated to it, find a flight path for each drone such that the total time cost is minimized and each detecting point is visited exactly once by exactly one drone.
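The naive nearest-spot assignment discussed in challenge (1) can be sketched as follows (a minimal illustration only; the coordinates and the Euclidean `hypot` distance are assumptions, not the paper's exact road-sampling procedure):

```python
from math import hypot

def assign_to_nearest_spot(detecting_points, parking_spots):
    """Naively assign each detecting point to its closest parking spot.

    detecting_points, parking_spots: lists of (x, y) tuples.
    Returns a dict mapping each spot index to the list of point indices
    assigned to it.
    """
    assignment = {s: [] for s in range(len(parking_spots))}
    for i, (px, py) in enumerate(detecting_points):
        # Euclidean distance to every candidate spot; keep the closest.
        nearest = min(
            range(len(parking_spots)),
            key=lambda s: hypot(px - parking_spots[s][0],
                                py - parking_spots[s][1]),
        )
        assignment[nearest].append(i)
    return assignment

points = [(0.0, 1.0), (0.5, 0.5), (9.0, 9.5)]
spots = [(0.0, 0.0), (10.0, 10.0)]
clusters = assign_to_nearest_spot(points, spots)  # points 0, 1 -> spot 0; point 2 -> spot 1
```

This is exactly the strategy the two examples above show to be suboptimal, which motivates the reallocation step in Section 4.3.2.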

(a) Situation 1

(b) Situation 2
4.2. The Overall Architecture of MVP
The overall architecture of MVP is shown in Figure 3. First, we mark points on the roads at a fixed interval to construct the set of initial parking spots. After that, the detecting points are clustered by allocating them to different parking spots according to their distances to the parking spots. Then, in order to improve the performance of MVP, several optimizations are used to reallocate detecting points. Subsequently, parking spots are allocated to ground vehicles in an iterative process. In each iteration, we determine the route of one ground vehicle and the paths of its UAVs at each parking spot on that route. The route of vehicle c_k is represented as W_k, and the paths of its UAVs at parking spot p are represented as R_k^p, where r_u^p denotes the flight path of UAV u at parking spot p. The ground vehicle c_k visits the parking spots in W_k sequentially. Besides, c_k releases its UAVs to visit the detecting points along the paths in R_k^p when it arrives at parking spot p. When the iterative process ends, we obtain the routes of all ground vehicles and the paths of their UAVs at each parking spot, which ensures that all detecting points in the target region have been accessed.

4.3. Parking Spot Selection
The selection of the parking spots for detecting points is divided into two steps: the initialization of the parking spots and their optimization. The details of the steps are described as follows.
4.3.1. Initialization
First, a set of candidate parking spots is determined by sampling points on the roads at a fixed interval. Then, for each detecting point, we calculate the distances between it and its nearby candidate parking spots, and the nearest candidate parking spot of each detecting point is determined. These spots compose the initial set of parking spots. Besides, there is another method to mark the candidate parking spots when the detecting points are distributed nonuniformly: it arranges the candidate parking spots nonuniformly. The process is described as follows:
(1) For each road, we first mark its endpoint as a candidate parking spot.
(2) Then, we calculate the density of detecting points around this spot. If the density is high, we mark the next candidate parking spot at a location close to it; if the density is low, we mark the next candidate parking spot at a location farther from it.
(3) After that, we calculate the density of detecting points around the second candidate parking spot and repeat the process above until all candidate parking spots of the road are marked.
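The density-adaptive marking rule can be sketched on a single one-dimensional road as follows (a simplified illustration; the step and radius parameters, and the inverse-density step rule, are assumptions rather than the paper's exact procedure):

```python
def mark_spots_by_density(road_length, point_positions, base_step=2.0,
                          radius=1.0, max_step=4.0):
    """Mark candidate parking spots along a 1-D road, spacing them more
    densely where detecting points cluster.

    road_length: length of the road segment.
    point_positions: positions of detecting points projected onto the road.
    Returns the ordered list of spot positions.
    """
    spots = [0.0]  # rule (1): start at the road endpoint
    pos = 0.0
    while pos < road_length:
        # rule (2): local density = points within `radius` of the last spot
        density = sum(1 for p in point_positions if abs(p - pos) <= radius)
        # high density -> short step (spots close together), low -> long step
        step = base_step / (1 + density)
        pos += min(max(step, 0.5), max_step)
        if pos <= road_length:
            spots.append(round(pos, 3))
    return spots

# Points clustered near the road start produce closely spaced spots there
# and widely spaced spots further along.
marked = mark_spots_by_density(10.0, [0.2, 0.4, 0.6])
```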
We have performed experiments to compare these two methods; the results are shown in Figures 4–6.



4.3.2. Optimization
Choosing the nearest candidate parking spot is not always the best strategy. Therefore, in order to get better performance, we need to optimize the allocation of detecting points.
(a) As shown in Figure 2(a), many selected parking spots have only a few detecting points. It incurs unnecessary time cost if a ground vehicle parks at such spots and releases its UAVs to access the corresponding detecting points. Therefore, the detecting points assigned to these parking spots should be reallocated. In detail, for a selected parking spot p, if its number of detecting points is less than a predefined threshold, we remove p from the set of parking spots and reallocate each detecting point of p to the parking spot that is second closest to it. The procedure is repeated until every selected parking spot has at least the threshold number of detecting points. In the experimental part, we investigate the impact of this threshold through experiments.
(b) As for the situation shown in Figure 2(b), it is obviously a better choice to allocate such a detecting point to the other parking spot, as this avoids wasting the flight time of drones. Therefore, a detecting point in such a situation should be reallocated. We use C(o, l) to denote the circle with center o and radius l, and D_p to represent the detecting points of parking spot p. For a detecting point d of parking spot p, suppose that d' is the nearest detecting point to d in D_p and that the distance between d and d' is l. Furthermore, let p' be the second nearest parking spot to d. If at least one detecting point in D_p' lies inside the circle C(d, l), the detecting point d is reallocated to p'.
The pseudocode of the reallocation procedure is shown in Algorithm 1.
4.4. Allocation of Parking Spots
After the selection of the parking spots, we allocate each parking spot to a proper ground vehicle. The travel route of each ground vehicle and the flight paths of its drones at each parking spot are determined during this allocation procedure. In short, we obtain the final solution of RPTSP when the procedure ends. An efficient heuristic allocation algorithm, EHA, is proposed to allocate the parking spots effectively.
During allocation, for a ground vehicle c_k, we need to find the parking spots nearest to it. A naive method is to calculate the distances between c_k and all parking spots, which obviously incurs considerable time consumption. To improve performance, EHA introduces an R-tree [19] to index the locations of all parking spots. We then utilize the branch-and-bound R-tree traversal algorithm proposed in [20] to find the nearest neighbor parking spots to c_k.
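For small instances, the nearest-neighbor query itself is straightforward; the sketch below uses a brute-force heap instead of the R-tree the paper employs (the R-tree with branch-and-bound search [20] avoids scanning every spot, which matters as the number of spots grows):

```python
import heapq
from math import hypot

def k_nearest_spots(vehicle_pos, spot_coords, k):
    """Return the ids of the k parking spots closest to the vehicle.

    vehicle_pos: (x, y) of the vehicle.
    spot_coords: dict spot_id -> (x, y).
    Brute force over all spots; the paper replaces this scan with an
    R-tree index and branch-and-bound traversal.
    """
    vx, vy = vehicle_pos
    return heapq.nsmallest(
        k, spot_coords,
        key=lambda s: hypot(vx - spot_coords[s][0], vy - spot_coords[s][1]),
    )

nearest_two = k_nearest_spots((0, 0), {0: (0, 0), 1: (1, 1),
                                       2: (5, 5), 3: (2, 0)}, 2)
```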
The details of EHA are described in Algorithm 2. EHA starts by building an R-tree over the locations of all parking spots (line 2). Then, it chooses a vehicle (line 5) and iterates to generate that vehicle's candidate route until the consumed time exceeds the time budget constraint (lines 8-27). In each iteration, it searches the R-tree for nearby parking spots (line 9), from which the parking spot that produces the minimum incentive reward is selected (lines 12-23). The procedure is repeated until the candidate routes of all vehicles have been determined (lines 4-29). Then, we choose the best vehicle according to its candidate route and add the corresponding routes into the solution (lines 30-31). The above process keeps looping until all parking spots have been allocated (lines 3-33).
Algorithm 2 is designed to minimize the generated incentive rewards under the time budget constraint while also minimizing the number of employed ground vehicles.
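The outer loop of EHA can be sketched as follows. This is a heavily simplified illustration, not Algorithm 2 itself: each spot is modeled with a fixed service time, driving is straight-line, driving distance serves as the proxy for the incremental incentive reward, and the committed vehicle is chosen as the one covering the most spots; all names and parameters are assumptions:

```python
from math import hypot

def eha_sketch(vehicles, spots, time_budget, k=3, v_g=10.0,
               spot_service_time=60.0):
    """Iteratively build candidate routes for all idle vehicles, then commit
    the best one, until every parking spot is allocated.

    vehicles: dict vehicle_id -> (x, y) start position.
    spots: dict spot_id -> (x, y).
    Returns dict vehicle_id -> ordered list of assigned spot ids.
    """
    unassigned = dict(spots)
    solution = {}
    while unassigned:
        candidates = {}
        for v, start in vehicles.items():
            if v in solution:
                continue  # this vehicle's route is already committed
            route, pos, t = [], start, 0.0
            remaining = dict(unassigned)
            while remaining:
                # the k nearest unallocated spots to the current position
                near = sorted(remaining,
                              key=lambda s: hypot(pos[0] - remaining[s][0],
                                                  pos[1] - remaining[s][1]))[:k]
                # the paper picks, among these, the spot with minimum
                # incremental incentive reward; distance is our proxy
                s = near[0]
                dt = hypot(pos[0] - remaining[s][0],
                           pos[1] - remaining[s][1]) / v_g + spot_service_time
                if t + dt > time_budget:
                    break  # the time budget would be exceeded
                t += dt
                route.append(s)
                pos = remaining.pop(s)
            if route:
                candidates[v] = route
        if not candidates:
            break  # the budget is too tight for any remaining spot
        # commit the vehicle whose candidate route covers the most spots,
        # which tends to keep the number of employed vehicles small
        best = max(candidates, key=lambda v: len(candidates[v]))
        solution[best] = candidates[best]
        for s in candidates[best]:
            del unassigned[s]
    return solution
```

With a generous budget, one vehicle absorbs all spots and only one vehicle is employed; with a tight budget, a second vehicle is recruited, mirroring EHA's vehicle-minimizing behavior.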
4.5. Path Planning of UAVs
When a ground vehicle arrives at a parking spot, the paths of its UAVs need to be planned. Notice that the path planning of UAVs is embedded in the procedure of allocating parking spots to ground vehicles, as shown in Algorithm 2. As mentioned before, this problem can be transformed into the MTSP, which is a typical NP-hard problem. There have been some works concerning the trajectory scheduling of mobile vehicles [21–23]. In this paper, we adopt the genetic algorithm (GA) and reinforcement learning (RL) to solve the MTSP, respectively. For convenience, the variant of MVP that uses GA to plan the paths of UAVs is named GA-MVP, and the variant that uses RL is named RL-MVP.
4.5.1. Genetic Algorithm
GA is a search algorithm used in computational mathematics to solve optimization problems. It is a type of evolutionary algorithm and has been widely applied to various combinatorial optimization problems. The procedure of planning the paths of UAVs based on GA is illustrated in Algorithm 3.
In the algorithm, one individual of the population is a 2D array holding the respective routes taken by the UAVs. P_c and P_m are the probability of crossover and the probability of mutation, respectively. Besides, the fitness function is defined as the reciprocal of the total flight distance of the UAVs.
The algorithm first randomly generates some individuals as the initial population (line 1) and then starts an iterative process to evolve it (lines 2-16). In each iteration, the fitness of each individual in the population is calculated (line 3). Then, we repeatedly take out the two individuals with the largest fitness and perform crossover on them with probability P_c to generate a new individual (lines 6-9). After that, mutation is performed on the new individual with probability P_m (lines 10-12). When the iteration ends, we obtain a new population (line 15). Finally, we select the individual with the maximum fitness in the final population as the final routes of the UAVs (lines 17-18).
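A compact GA for the MTSP at one parking spot can be sketched as follows. This is an illustration of the approach rather than Algorithm 3 itself: the partition of points among UAVs is fixed to even segments of one permutation, ordered crossover and swap mutation are assumed operators, and all parameter values are illustrative:

```python
import random
from math import hypot

def tour_length(depot, pts, order, breaks):
    """Total flight distance when permutation `order` is split at `breaks`
    into one closed depot-to-depot tour per UAV."""
    total, start = 0.0, 0
    for end in list(breaks) + [len(order)]:
        seg = order[start:end]
        if seg:
            prev = depot
            for i in seg:
                total += hypot(prev[0] - pts[i][0], prev[1] - pts[i][1])
                prev = pts[i]
            total += hypot(prev[0] - depot[0], prev[1] - depot[1])
        start = end
    return total

def ga_mtsp(depot, pts, n_uavs, pop_size=30, gens=200, pc=0.9, pm=0.2, seed=0):
    """Evolve UAV routes; fitness is the reciprocal of total flight distance,
    so minimizing tour_length maximizes fitness."""
    rng = random.Random(seed)
    n = len(pts)
    breaks = [round(j * n / n_uavs) for j in range(1, n_uavs)]  # even split

    def crossover(a, b):
        # ordered crossover: keep a slice of `a`, fill the rest in b's order
        i, j = sorted(rng.sample(range(n), 2))
        hole = set(a[i:j])
        rest = [g for g in b if g not in hole]
        return rest[:i] + a[i:j] + rest[i:]

    def mutate(order):
        i, j = rng.sample(range(n), 2)   # swap two visit positions
        order[i], order[j] = order[j], order[i]

    pop = []
    for _ in range(pop_size):
        order = list(range(n))
        rng.shuffle(order)
        pop.append(order)
    for _ in range(gens):
        pop.sort(key=lambda o: tour_length(depot, pts, o, breaks))
        nxt = pop[:2]                    # elitism: keep the two fittest
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)  # parents from the fittest ten
            child = crossover(a, b) if rng.random() < pc else a[:]
            if rng.random() < pm:
                mutate(child)
            nxt.append(child)
        pop = nxt
    best = min(pop, key=lambda o: tour_length(depot, pts, o, breaks))
    return best, tour_length(depot, pts, best, breaks)

# Two UAVs, four points: the optimum sends one UAV along each axis.
best, length = ga_mtsp((0.0, 0.0),
                       [(1.0, 0.0), (2.0, 0.0), (0.0, 1.0), (0.0, 2.0)], 2)
```

Elitism guarantees the best solution found is never lost, so on this tiny instance the GA reliably reaches the optimal total distance of 8.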
4.5.2. Reinforcement Learning
Reinforcement learning is an area of machine learning that learns what to do in an environment to maximize a numerical reward. Since a traditional heuristic algorithm for solving combinatorial optimization problems may often be suboptimal due to the hard nature of the problems, RL is a good alternative to search the solution. In our work, we adopt the learning-based approach in [24] to optimize the MTSP, i.e., the path planning of UAVs.
4.6. Algorithm Complexity Analysis
In this section, we analyze the time complexity of GA-MVP. GA evolves the population for a fixed number of generations; in each generation, it performs crossover or mutation to produce a new population of fixed size, so the running time of GA is proportional to the product of the number of generations and the population size. To generate the route of a ground vehicle, GA-MVP builds a bounded number of candidate routes, and for each candidate route it searches at most a fixed number of nearby parking spots, at each of which GA is invoked to solve an MTSP instance. Since a bounded number of routes is generated in total, the time complexity of GA-MVP is the product of these factors.
5. Experimental Simulation
In this section, we evaluate the proposed algorithms through simulation experiments using the simulator implemented in [25]. In our experiments, a number of detecting points are distributed in a square target region, in which 1 unit represents 50 m in reality. The roads in the region are generated randomly. For the detecting points, we consider two types of distribution: uniform and nonuniform. Besides, a set of ground vehicles is distributed randomly in the target region, waiting to be employed. The hovering time and speed of all drones possessed by different ground vehicles are identical; however, different ground vehicles may carry varying numbers of drones. The speeds of the drones and the ground vehicles are set to 5 m/s and 10 m/s, respectively.
Since we are the first to study the path planning of multi-vehicle-assisted multi-UAVs, no existing algorithm supports employing multiple vehicles. Hence, to evaluate the performance of MVP in the multivehicle environment, we design a naive greedy algorithm (GRE) as the baseline. In GRE, we utilize the method in VAMU [12] to select parking spots and allocate detecting points. The allocation of parking spots in GRE is greedy: it simply employs all ground vehicles and allocates the closest parking spots to each of them. After that, for each ground vehicle, GRE utilizes GA to plan the paths of the ground vehicle and its UAVs. Besides, for completeness, we compare MVP with VURA [18]. VURA is a vehicle-assisted multi-UAV routing and scheduling algorithm, but it employs only one vehicle, so the number of available vehicles for MVP is limited to 1 when comparing with VURA. In a word, we compare MVP with GRE in the multivehicle environment and with VURA in the single-vehicle environment.
In our experiments, we focus on two metrics. The first is the incentive cost of the algorithm, defined as the total incentive rewards produced over the whole procedure of performing all sensing tasks. The second is the number of employed vehicles: the fewer vehicles an algorithm requires, the better it performs when vehicles are scarce.
We designed several experiments to compare the performance of MVP and GRE. First, we varied the number of tasks (i.e., the number of detecting points) to study its impact, and we also investigated the impact of the number of tasks on the travel distances of the UAVs and the vehicles. Then, the time budget was varied to study its impact. After that, we varied the speed of the UAVs and the speed of the ground vehicles, respectively. Finally, several experiments were performed to investigate the impact of the reallocation threshold on MVP. Each of the above experiments was performed in two environments: (1) the uniform environment, where detecting points are distributed uniformly, and (2) the nonuniform environment, where detecting points are distributed nonuniformly.
5.1. The Impact of the Number of Tasks in the Environment of Multiple Vehicles
Figure 7 shows the impact of the number of sensing tasks on the generated incentive rewards. It can be observed that the incentive rewards generated by all algorithms are positively related to the number of sensing tasks. Moreover, GA-MVP and RL-MVP outperform GRE significantly. Besides, RL-MVP outperforms GA-MVP, which indicates that RL performs better than GA in the path planning of UAVs.
Figure 7: (a) uniform distribution; (b) nonuniform distribution.
Figures 8 and 9 present the impact of the number of tasks on the total travel distance of UAVs and vehicles, respectively. Figure 8 shows that the total flight distance of all UAVs is approximately the same across algorithms regardless of the number of tasks. Figure 9 shows that the total travel distance of all vehicles in GA-MVP and RL-MVP is smaller than in GRE, which is attributable to MVP's parking-spot allocation strategy. In all algorithms, however, the total vehicle travel distance increases with the number of tasks.
Figure 8: (a) uniform distribution; (b) nonuniform distribution.
Figure 9: (a) uniform distribution; (b) nonuniform distribution.
Figure 10 shows that GA-MVP and RL-MVP perform better than GRE in terms of the number of employed vehicles. The number of employed vehicles in GRE is constant because GRE simply employs all candidate vehicles. Moreover, the number of employed vehicles in GA-MVP and RL-MVP generally increases with the number of tasks: under a fixed time budget, more tasks require more ground vehicles.
Figure 10: (a) uniform distribution; (b) nonuniform distribution.
5.2. The Impact of the Time Budget in the Environment of Multiple Vehicles
Figure 11 indicates that GA-MVP and RL-MVP outperform GRE regardless of the time budget. The performance of GRE is unaffected by the time budget: since there are enough ground vehicles, its time cost never exceeds the budget. In addition, the incentive rewards generated by GA-MVP and RL-MVP decrease as the time budget increases in the uniform environment but remain almost unchanged in the nonuniform environment, because in the latter the number of employed vehicles can hardly decrease any further.
Figure 11: (a) uniform distribution; (b) nonuniform distribution.
5.3. The Impact of the Number of Tasks in the Environment of a Single Vehicle
Figures 12–14 compare RL-MVP and GA-MVP with VURA in the single-vehicle environment. Figure 12 indicates that the incentive rewards generated by all algorithms grow with the number of sensing tasks, and that RL-MVP outperforms the other algorithms in both uniform and nonuniform environments. VURA performs as well as GA-MVP in the uniform environment but worse in the nonuniform environment. Figures 13 and 14 show that GA-MVP and RL-MVP outperform VURA in the path planning of UAVs but are inferior to VURA in the path planning of the vehicle. These results are acceptable, since MVP is designed for the multi-vehicle environment rather than the single-vehicle one.
Figure 12: (a) uniform distribution; (b) nonuniform distribution.
Figure 13: (a) uniform distribution; (b) nonuniform distribution.
Figure 14: (a) uniform distribution; (b) nonuniform distribution.
5.4. The Impact of the Parameter
Figure 15 indicates that the incentive rewards generated by MVP decrease as the value of the parameter increases. The decrease is gentler in the nonuniform environment: when detecting points are distributed nonuniformly, most parking spots have enough detecting points, so no UAVs sit idle. Finally, as shown in Figure 15, setting the value to 6 allows MVP to work well in both the uniform and nonuniform environments.

5.5. The Impact of the Method of Selecting Candidate Parking Spots
Figure 4 presents the total time cost of arranging candidate parking spots, allocating detecting points, and optimizing the allocation under both methods. The method that arranges candidate parking spots nonuniformly costs more time than the other, because the density calculation is time-consuming. Meanwhile, Figures 5 and 6 indicate that the incentive rewards generated by the two methods are approximately the same.
In short, RL-MVP consistently performs significantly better, i.e., it generates fewer incentive rewards and requires fewer ground vehicles as the number of tasks, the time budget, the speed of UAVs, or the speed of ground vehicles varies.
6. Conclusion
In this paper, we propose an efficient algorithm called MVP to address the multi-vehicle-assisted multi-UAV path planning problem in MCS. To the best of our knowledge, we are the first to take multiple ground vehicles into consideration. In MVP, the detecting points are first allocated to proper parking spots. We then propose an efficient heuristic allocation algorithm, EHA, to plan the paths of ground vehicles, while a genetic algorithm and reinforcement learning are utilized to plan the paths of UAVs. Simulation results show that RL-MVP outperforms the other algorithms in terms of generated incentive rewards in most cases. In the multi-vehicle environment, GRE produces about - more incentive rewards than RL-MVP, and GA-MVP produces about more incentive rewards than RL-MVP. In the environment where only one vehicle is available and detecting points are distributed nonuniformly, VURA produces about more incentive rewards than RL-MVP, and GA-MVP produces about more incentive rewards than RL-MVP.
In this work, we assume that all UAVs have the same flight speed and hovering time. In reality, however, one ground vehicle may carry different kinds of UAVs, and different types of UAVs have different flight speeds and hovering times. Hence, in future work, we will consider the situation where the UAVs carried by one vehicle are heterogeneous.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work is supported by the National Natural Science Foundation of China under Grant No. U20B2050 and the Science and Technology Funds from National State Grid Ltd. (the Research on Key Technologies of Distributed Parallel Database Storage and Processing based on Big Data).