Abstract

Increasingly inexpensive unmanned aerial vehicles (UAVs) are helpful for searching for and tracking moving objects in ground events. Previous works have either assumed that data about the targets are sufficiently available, or they rely solely on on-board electronics (e.g., camera and radar) to chase them. In a searching mission, path planning is essentially preprogrammed before takeoff. Meanwhile, a large-scale wireless sensor network (WSN) is a promising means for monitoring events continuously over immense areas. Due to disadvantageous networking conditions, it is nevertheless hard to maintain a centralized database with sufficient data to instantly estimate target positions. In this paper, we therefore propose an online self-navigation strategy for an integrated UAV-WSN system to supervise moving objects. A UAV on duty exploits data collected on the move from ground sensors together with its own sensing information. The UAV autonomously executes edge processing on the available data to find the best direction toward a target. The designed system eliminates the need for any centralized database (fed continuously by ground sensors) in making navigation decisions. We employ a local bivariate regression to formulate acquired sensor data, which lets the UAV optimally adjust its flying direction in synchrony with reported data and object motion. In addition, we construct a comprehensive searching and tracking framework in which the UAV flexibly sets its operation mode, so that minimal communication and computation overhead is induced. Numerical results obtained from NS-3 and Matlab cosimulations show that the designed framework is clearly promising in terms of accuracy and overhead costs.

1. Introduction

Recently, the combination of unmanned aerial vehicles (UAVs) and wireless sensor networks (WSNs) has been attracting much attention from the research community. Increasingly inexpensive UAVs, thanks to their mobility and flexibility, may efficiently help maintain network connectivity and relay data [1, 2]. While a WSN is an appropriate solution for gathering data continuously, its network nodes are not easily relocated and their wireless communication links are essentially unreliable [1–3]. This hinders capturing a full view over a large area and also challenges instant operations in response to sporadic events. If these events are associated with moving objects, the problem is even more complicated [4, 5]. On the other hand, increasingly popular UAVs may effectively fill the gap, thanks to their high dynamism and autonomy. If navigated smartly, they may quickly approach event places, monitor their evolution, and even get involved in handling operations [6–8]. However, over immense spaces such as fields, parks, forests, national borders, seas, and deserts, UAVs alone cannot search elaborately and extensively due to their limited flight time and distance. Their embedded sensing devices can merely “see” the ground underneath within a certain range. This calls for data collection from ground sensors to enhance the searching capability of UAVs. As such, the mutually complementary strengths of UAVs and WSNs make the combination suitable for many supervising applications, such as monitoring intruders, animals, fire, and vehicles.

Practically, connectivity in a WSN is quite hard to keep continuous, obstructing data concentration on gateways placed at long distances from sensors. This definitely hampers construction of an adequate measurement database for calculating the exact object location [9, 10]. However, known searching and tracking strategies in the literature heavily rely on such a centralized database. Locating these objects requires heavy data exchange and storage of history statistics [4, 5, 11]. Sensors are assumed to be densely populated and robust enough to provide sufficient statistics continuously, which is unrealistic in large areas. Additionally, it is costly in terms of energy and communication to collect data from distant sensors many hops away [12, 13].

Technically, an event occurrence is expressed by dramatic changes of one or a few parameters, such as gas volume, sound waves, light, and radiation in the affected region [8, 11, 14, 15]. Unfortunately, data about the event may not be fully reported by sensors at once. Consequently, the system cannot immediately locate and track objects. Sometimes, the statistics necessary to accurately pinpoint a target object can only be collected after the UAV flies through a relatively large suspected area [9–12]. This means that preprogramming the UAV trajectory before takeoff cannot provide the optimal flight path.

By analyzing radio signals intensively, several works have introduced radio-based localization algorithms [16–18]. Namely, radio signals are processed to predict the current target position. The authors of [16] introduce a trilateration algorithm in combination with the LMS (least mean square) method. The strategy insightfully learns the channel model to draw localization inference about the target position. Similarly, another RSSI- (received signal strength indicator-) based node localization scheme is proposed in [17]. A distributed weighted search localization algorithm (WSLA) was constructed, combining a node localization precision classification scheme with weight-based searches. The strategy brings considerable improvement with respect to accuracy. Meanwhile, the authors of [18] constructed an RSSI-assisted localization scheme using machine learning and a Kalman filter. They designed two learning algorithms, ridge regression (RR) and vector-output regularized least squares (vo-RLS), which effectively help reduce localization error. While the radio-analysis approach is conceptually interesting and incurs little latency, its accuracy apparently depends on the presence of obstacles. Furthermore, calibrating and training the model that maps RSSI to the target position are potentially expensive. The method is therefore likely unfeasible when the mission takes place in a spacious area. In addition, random changes in the environment potentially affect the accuracy as well.

In a similar approach, the authors of [4, 19] introduced tracking methods based on capturing images of target objects and/or on recording their sound. Upon intensive image and acoustic data analysis, the system infers instant positions of a moving object that falls within the visible/audible range of sensors. Arranging directional imaging sensors in a node-dense grid, the authors of [4] designed a WSN that jointly guarantees target tracking and network connectivity. The system works well indoors and on roads, successfully tracking multiple objects. The solution is however expensive and highly object-selective. On the other hand, both acoustic and image data were made use of in [19] to locate target objects. The authors employed the Gauss-Markov mobility model to predict the target trajectory, which helps restrict the suspected region to locate and track. The scheme is therefore highly energy-efficient. However, this approach generally depends on central gateway(s) to execute heavy computing tasks. Multimedia communication over a resource-constrained WSN is apparently costly.

With respect to the usage of on-board sensing electronics, the authors of [20, 21] proposed image-based navigation algorithms. This strategy obviously eases the work of users/operators, given that UAVs provide visual data of targets. Advanced image processing techniques also enable quick object detection and bring a high autonomy in surveying events. A combined template matching and morphological filtering algorithm is introduced in [20], which allows a UAV to track another cooperative vehicle within a distance of a few meters. The tracking machine uses a monocular camera and exchanges data with the target to assure the mission's success. Differently, multiple means (monocular camera, GPS receiver, and IMU sensor) were exploited in [21] for a UAV to supervise moving vehicles at multiple levels. The proposed scheme first visually detects a vehicle, then tracks its movement, and eventually processes data to navigate the flying UAV from the ground. However, in all of these strategies, on-board cameras can only scan a limited footprint on the ground. This approach is therefore suitable only for tracking vehicles that are moving on the road. It cannot deal with free movement of objects elsewhere. In this regard, our work opts for ground sensors as a means of providing extra data. This allows the system to cover a larger UAV-traceable area. To extend the monitoring range, the authors of [15] let networked UAVs work in cooperation with WSNs to detect and monitor natural disaster events based on images and environment statistics (temperature, smoke, humidity, and light). A multilevel data analysis, advanced image processing in particular, and a cloud- (ThingSpeak-) assisted ground monitoring/maneuvering subsystem were designed to detect and locate such events as fire in the early stage. While the system is practical and applicable with respect to natural disasters, it is not flexible enough to tightly track particular moving objects. Moreover, the UAVs do not seem to be highly autonomous.

From the data processing and representation perspective, ontology and domain knowledge modeling have been introduced to interpret event occurrences [22, 23]. The semantic presentation technology concentrates on intensively analyzing UAV-acquired data (videos, images) to visually understand the scene. By applying AI techniques for advanced image and video processing, it is able to detect, identify, and label objects of interest, which facilitates context interpretation [22]. Ontology-driven representation using semantic statements, such as those in the TrackPOI schema [23, 24], generates high-level descriptions of the scene, which supports UAVs and human operators in visually detecting, differentiating, and tracking moving objects. Note however that this approach does not target searching for any object or navigating UAVs in real time to acquire those data. The data processing operations can only be performed under the assumption that the flying UAVs have already visually found the objects and are successfully tracking them. Furthermore, ontology-driven computation requires a high concentration of collected data and obviously induces long delays due to the heavy overhead generated by AI algorithms and the semantic web. In the context our study targets, objects may move arbitrarily (unlike vehicles on an established road) at a high velocity, which demands instant responses from inherently resource-limited UAVs to keep track of them continuously. This consequently discourages the application of any heavy-overhead protocol. In reality, objects may require sensed data other than visual media, such as sound and radiation, to be detected and tracked. Thus, the ontology approach cannot be used as a key means to search for moving objects and to self-navigate UAVs in tracking them in real time. In principle, the technology may be employed in conjunction with our proposed self-navigation strategy to facilitate ground operations, but this is outside our study scope.

Remarkably, the authors of [8] introduced an interesting and promising framework in which UAVs are equipped with gas sensors to collect data. The study also devised a self-adaptive team of UAVs for monitoring gas emission events. However, in this approach, if the gas detection data collected by UAVs are not sufficient to fully cover the event area, the positioning and tracking mission likely fails. Preprogramming paths before departure, as mentioned earlier, potentially navigates the UAVs along nonoptimal trajectories. This means that the model works efficiently only against such events as gas emission, where no highly dynamic object is targeted. Relying solely on on-board sensors also excludes the capability of monitoring ground objects that do not alter measurement results at the flying altitude. Similarly, the progressively spiral-out scheme [25] presents an exhaustive scanning method to discover a mobile target. A time-varying team of UAVs is coordinated to fly along spiral-out trajectories with the hope that the spiral-bounded area covers the target's location. Depending on how the estimated confidence area changes over time, some UAVs may leave the team or extra ones may be invited to join. While this scanning algorithm is truly aimed at searching for mobile targets, its exhaustiveness costs more flying distance and more UAVs than needed. Furthermore, as the UAVs fly along planned paths regardless of how the target object moves, they do not continuously keep track of it.

In this work, we therefore construct an online (dynamic) self-navigation strategy where a UAV makes use of measurement data reported from ground sensors to navigate itself. Namely, the UAV is steered by referring to instant sensed data, rather than being path-planned before takeoff. Its on-board camera(s) or other sensing means is also employed for confirming the presence of the target object and tracking its movement. The problems we attempt to address are (i) how the UAV should be self-navigated to find and catch up with the moving object under different levels of data availability and (ii) how the available sensed data are processed to create the best navigation hints. Our technical contributions reported in this paper accordingly include the following:

(1) We build a system model in which UAVs cowork with sensors to locate and track events associated with moving objects. The UAVs collect data from preexisting ground sensors as they are flying. In particular, the UAVs may also proactively deploy sensors on the fly to supplement sensed data about the moving object if necessary. The UAVs are directed based on processing data acquired on the move and on captured images to detect the object.

(2) We propose an online self-navigation strategy based on the above data sources. The dynamic steering scheme is carried out without referring to any centralized database. A local bivariate regression is constructed to formulate the scalar field from measurement data, which supports calculating/predicting the best flying direction regularly. The work presents an efficient transformation of the bivariate function; the local regression can accordingly be implemented by applying known algorithms for univariate polynomials.

(3) Dealing with object motion, we introduce a “wait-in-front” tactic for a UAV to catch up with the moving object more quickly. Namely, by observing its motion, the UAV heads toward the next predicted position of the object, instead of its current one. This means that the flying direction is deflected from the gradient vector by a calculated angle.

(4) We present a maneuvering framework in which the UAV flexibly changes its path adjustment policy depending on gathered measurement data. Specifically, UAV steering may switch from gradient vector-driven to heading straight ahead, as well as from searching to tracking or the other way around. The goal is to save communication load and edge processing cost aboard. Besides, the flying UAV autonomously restores searching operations when the object has moved away from its tracking range.

The rest of this paper is organized as follows. Section 2 describes the system model, highlighting the integrated UAV-WSN structure and UAV self-navigation strategy. It also explains how an object influences the measured parameter, followed by a comprehensive protocol for searching and tracking missions. Section 3 details a local bivariate regression formulation for calculating the gradient vector at any point. Then, algorithms for dealing with moving objects are presented in Section 4. A detailed description of how UAVs are self-navigated in an online manner to catch up with them is also presented. Section 5 subsequently analyzes overhead of our self-navigation strategy in terms of complexity and communication load. To demonstrate the soundness of the proposed framework, we report NS-3 and Matlab-based cosimulation results in Section 6. Statistical performance of the self-navigation in terms of accuracy and costs is concretely discussed therein. Finally, Section 7 draws our conclusions.

2. System Model

Being composed of UAVs and wireless sensors, the proposed system is visually indicated in Figure 1. Measurement data are collected from ground sensors, both directly and via multihop paths. Occasionally, a UAV may drop sensors down where and when needed to get more data for locating an object. In addition, the following conditions hold:

(i) The UAV knows its current position and that of the sensors from which it has just received measurement data.

(ii) The terrain under surveillance is relatively flat, so that the problem of searching and tracking is presented in 2D space.

(iii) The UAV at the standby state is signaled to fly searching once an object presence is suspected, but it does not have sufficient data at once to know the exact object location.

Once a detectable object appears and moves around in the area covered by the WSN, we say that an event occurs. The presence of the object (e.g., vehicle, animal, or radiation source) will cause abnormal changes in some measurement data within its surrounding region. A UAV on its surveillance mission needs to sense these changes. Without referring to any centralized database of sufficient data, the UAV has to rely on data sent from ground sensors to be self-navigated. Due to the imperfection of the ground sensor network, it should collect data on the move to gradually learn more about the event. Understanding the pattern of changes associated with the object's appearance, the UAV may appropriately shape its trajectory in a dynamic manner.

2.1. Object and Event Description

Conceptually, an event is the circumstance in which one or a few remarkable objects appear in the area covered by the large-scale WSN. Its occurrence triggers the system to detect, locate, and track moving objects. The presence of a target object results in the fact that some parameter (either primitive or compound), whose value is position-dependent, excessively increases or decreases toward extreme values [8]. Let us denote this parameter, which will be measured by ground sensors, as $F$. It is assumed that the impact of the object presence on the parameter values at surrounding positions is transient; i.e., it quickly disappears when the object has moved away. In reality, such objects may be vehicles, machines, animals, intruders, radiation sources, etc. that emit light, radio, radiation in general, sound, or the like. Apparently, sensed data will reflect the instant intensity of these signals. Without loss of generality, we assume that the parameter values get higher in the object-affected area [8, 11, 26]. Figure 2 shows a typical plot of $F$ values over the object-affected ground space at some time instant, which forms a cone $\mathcal{C}$. There exist three sections separated by two thresholds $T_1$ and $T_2$ ($T_1 < T_2$): definitely “no” ($F < T_1$), likely ($T_1 \le F \le T_2$), and definitely “yes” ($F > T_2$), which literally indicate the possibility of object existence thereon.
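
To make the event model concrete, the following Python sketch generates the assumed cone-shaped field and classifies a measured value against the two thresholds. All names and constants here are illustrative choices of ours, not values from the system setup:

```python
import math

# Illustrative measurement model: the object at (ox, oy) raises F into
# a cone that decays linearly with ground distance from the center.
def cone_value(x, y, ox, oy, peak=100.0, slope=0.5):
    """Value of F at (x, y) for an object centered at (ox, oy)."""
    return max(0.0, peak - slope * math.hypot(x - ox, y - oy))

# The two thresholds T1 < T2 split readings into the three sections.
def classify(f, t1=20.0, t2=60.0):
    if f < t1:
        return "no"        # definitely no object nearby
    if f <= t2:
        return "likely"    # suspected region
    return "yes"           # definitely inside the affected area

# A sensor 100 m from the object center reads 50.0 -> "likely".
print(classify(cone_value(100.0, 0.0, 0.0, 0.0)))
```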

The projection of the cone top onto the ground plane should be the center of the object if it is present. As soon as the searching UAV acquires sufficient sensor data, it gets to know this position. The UAV is then navigated directly toward the object. In reality, this good luck does not happen right at the beginning of the search, due to the following facts:

(i) Ground sensors do not always successfully transmit their data to a distant centralized point.

(ii) The object-affected area is large, so it takes time for the UAV to capture data around the cone peak.

(iii) The occurrence of the event itself may, to some extent, adversely affect the sensor network connectivity, which hinders centralizing data.

As a result, the UAV only obtains data from its nearby sensors, which likely excludes those close to the object in the early stage of the search. The question raised here is, given that limited information, how the UAV should be directed so as to move as close as possible to the object. The answer is that the UAV should fly along the direction that exhibits the steepest rise of measurement values.

If multiple events occur in parallel, separate objects should be observed. In such a case, the system simply dispatches one UAV to search for and track each of them. To identify and label each object, UAVs essentially rely on image-based recognition or on special on-board electronics that process, for example, ultrasound or radiation signals. Another alternative is to track the “cone” associated with each object; in principle, separate objects should create differently shaped plots of $F$. Furthermore, by tracking each moving object tightly (through continual data updates from ground sensors), the system may also predict its position better and hence reduce the probability of confusion.

2.2. Online Self-Navigation Strategy

While the UAV is flying, data reported from connected sensors let it model a scalar field represented by parameter $F$. Formulation of acquired measurement data can be made at least for the nearby space, $F = F(x, y)$, around $P$, where $P$ is the current UAV position projected (orthogonally) onto the ground (hereafter loosely called the UAV position/location). The object place practically exhibits a sudden change of gradient [8, 26], or a climax of the measured parameter. Obviously, if the UAV always flies along the gradient vector, it definitely reaches the climax in the end.

Knowing the explicit expression of function $F$, the UAV can determine the gradient vector of scalar field $F$ at its current position $P$:

$$\nabla F(P) = \left( \left. \frac{\partial F}{\partial x} \right|_P, \ \left. \frac{\partial F}{\partial y} \right|_P \right). \quad (1)$$

Once the gradient vector has been computed, the UAV is dynamically self-navigated according to the velocity vector

$$\vec{v} = \gamma \, \nabla F(P), \quad |\varphi| \le \varphi_{max}, \quad (2)$$

where $\gamma$ is a proportional factor that depends on the instant ground speed of the UAV, and $\varphi$ and $\varphi_{max}$ are the turning angle and its maximally allowed value, respectively.
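
The following sketch implements this steering rule under a simple 2D kinematic assumption: the UAV turns toward the gradient, but by no more than the allowed angle per update. Function names and the default turn limit are illustrative, not prescribed by the paper; a nonzero gradient is assumed:

```python
import numpy as np

def next_heading(grad, heading, max_turn):
    """Rotate `heading` toward `grad` by at most `max_turn` radians."""
    desired = grad / np.linalg.norm(grad)
    cur = heading / np.linalg.norm(heading)
    # signed angle from current heading to the gradient direction
    ang = np.arctan2(cur[0] * desired[1] - cur[1] * desired[0],
                     np.dot(cur, desired))
    turn = np.clip(ang, -max_turn, max_turn)    # clamp to phi_max
    c, s = np.cos(turn), np.sin(turn)
    return np.array([c * cur[0] - s * cur[1], s * cur[0] + c * cur[1]])

def velocity(grad, heading, speed, max_turn=np.pi / 6):
    """Velocity vector: ground speed times the clamped direction."""
    return speed * next_heading(grad, heading, max_turn)
```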

However, the UAV gets sensed information only from ground sensors that are not too far away; the expression of $F$ is valid only in its surrounding space. When the object is still distant, the expression-valid domain does not cover the object position. This means that, with limited data from ground sensors nearby, the UAV knows only the closer part of the cone while moving. Repeating the formulation after each certain moving length is consequently required. Namely, it takes time for the UAV to locate the peak position of the cone $\mathcal{C}$. Upon getting to know the peak, the UAV is signaled peak found as seen in Figure 3. It then simply heads, as fast as possible, along the straight line thereto (state straight line). There are two possible cases when the UAV reaches the peak:

(i) The object is successfully found there. Confirmation of the object presence may be realized based on on-board sensing devices. For a simplified presentation, we hereafter assume that images captured by on-board cameras are processed for this purpose. Signal found may also be generated if some measured values exceeding $T_2$ are received. The UAV now terminates its searching phase and switches to the tracking state as seen in the figure. Its self-navigation behaviors in the tracking mode will be described later in Section 4. Briefly speaking, the process mainly relies on the on-board camera(s), instead of ground sensors.

(ii) The UAV does not detect any object; what it reached is just a local peak. Another possibility is that the object has already run too far away. In either situation, the UAV is signaled not found and moves to work in state discovery. As it gets extra ground data, the UAV may at some point be informed of abnormally increasing values (greater than $T_1$) at a place. It then returns to the searching state on signal suspected. On the other hand, if the UAV luckily finds another peak thanks to further ground data, it simply reiterates another straight-line journey.

During the tracking stage, if the object moves out of the localizable range, signal missing is generated, bringing the UAV to the discovery state as well. As such, the gradient vector is not estimated throughout the whole searching and tracking mission. Its calculation is necessary only when ground sensor data are insufficient. Even in the searching phase, the UAV ceases the periodical gradient update once a peak is located. These behaviors are later embodied in an algorithm devised in Section 4.2. It is also noticed that the UAV normally stays at the standby state, being ready for takeoff. When a suspicious value of $F$ (greater than $T_1$) is received, it instantly departs for the source position. Besides, during its mission, the UAV should automatically land once the tracking mission ends or when it is about to run out of energy or fuel.
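
For clarity, the finite state machine of Figure 3 can be sketched as a small transition table. State and signal names follow the text; the dictionary-based encoding is merely one illustrative realization:

```python
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()
    SEARCHING = auto()
    STRAIGHT_LINE = auto()
    TRACKING = auto()
    DISCOVERY = auto()

# (current state, signal) -> next state, per Figure 3
TRANSITIONS = {
    (State.STANDBY,       "suspected"):  State.SEARCHING,
    (State.SEARCHING,     "peak_found"): State.STRAIGHT_LINE,
    (State.STRAIGHT_LINE, "found"):      State.TRACKING,
    (State.STRAIGHT_LINE, "not_found"):  State.DISCOVERY,
    (State.TRACKING,      "missing"):    State.DISCOVERY,
    (State.DISCOVERY,     "suspected"):  State.SEARCHING,
    (State.DISCOVERY,     "peak_found"): State.STRAIGHT_LINE,
}

def step(state, signal):
    """Apply one signal; unknown pairs leave the state unchanged."""
    return TRANSITIONS.get((state, signal), state)
```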

To assure that the UAV is self-navigated toward expected positions even in case of wind and atmospheric disturbances, a supplementary adjustment model [27, 28] can be incorporated. With respect to positioning, if the UAV relies on satellite-based positioning systems (e.g., GPS), the impact of bad weather (e.g., fog, smog, and rain) on the accuracy is said to be insignificant [29]. Furthermore, the UAV may also improve the accuracy by correlating its own data with the sensor-reported samples. In confirming the object appearance, severe reduction of visibility due to bad weather may degrade camera images. To overcome this challenge, the UAV may be equipped with supplementary on-board sensors that measure ultrasound, radiation, or the like, to detect the target object better.

3. Data Formulation and Processing

In the searching phase, the UAV is flying in the object-affected region where values of $F$ are greater than $T_1$. To update the gradient vector periodically, it keeps collecting data from ground sensors. Based on the data, function $F$ is formulated, whose expression is valid only for surrounding positions. This section presents an appropriate nonparametric regression scheme for this purpose. For ease of reading, Table 1 explains the meaning of all important symbols.

3.1. Formulation Expression of Ground Data

As stated in the previous section, $F$ is a bivariate function of coordinates $x$ and $y$ on the 2D ground plane. There are theoretically two ways to formulate $F$ at a certain point: interpolation and regression. We choose local regression for the following reasons:

(i) The number of measurement samples may be large, making polynomial interpolation intractable. New measurement samples may come in at a high rate, and their values are time-varying.

(ii) In reality, the impact of an event on the measured parameter is maximal at its center and gradually decreases with the distance therefrom. This implies a necessity of weighting measurement values, taking into account their source positions.

Given that at present time $t$, parameter $F$, whose value is a function of coordinates $x$ and $y$, is expanded as an $n$-degree polynomial bivariate function $F_t$, it can be expressed as

$$F_t(x, y) = \sum_{0 \le i + j \le n} \beta_{i,j} \, x^i y^j + \varepsilon, \quad (3)$$

where $\varepsilon$ represents the noise influence on measurements. In words, $F_t$ is the sum of monomials arranged in an increasing total order of the two variables. For the ease of local regression at edge processing, we propose to transform the 2D indexing $(i, j)$ in Equation (3) into 1D indexing for $F_t$.

Lemma 1. There exists a bijective map between the two-dimensional coefficient array $\{\beta_{i,j}\}$ and a one-dimensional one $\{\beta_k\}$, whose index $k$ also locates the corresponding terms of $F_t$.

Proof. Let us define scalars $\beta_k$ and vector $\beta = (\beta_1, \beta_2, \ldots, \beta_N)^T$. Coefficients $\beta_k$ characterize polynomial $F_t$, being calculated by local regression. Obviously, the size of $\beta$, equal to the number of monomials of $F_t$, is $N = (n+1)(n+2)/2$. We now number these monomials by a single index $k$. Looking at the expression of $F_t$ in Equation (3), one can realize that the total order $s$ of the $k$th monomial ($1 \le k \le N$) satisfies

$$\frac{s(s+1)}{2} < k. \quad (4)$$

Note that $s$, $i$, and $j$ must be nonnegative integers with $i + j = s$. Given a value of $k$, the corresponding value of $s$ is the maximum integer that satisfies Equation (4). We can accordingly locate the $k$th term of $F_t$ (not counting the noise), monomial $x^i y^j$, where the $i$ and $j$ indexes are calculated as

$$j = k - \frac{s(s+1)}{2} - 1, \qquad i = s - j. \quad (5)$$

Once the values of $i$ and $j$ are found, the $k$th term of $F_t$, i.e., $\beta_k x^i y^j$, is determined. This completes the proof.
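
A direct Python rendering of this bijection recovers the exponent pair of the $k$th monomial (`exponents` is a hypothetical helper name of ours):

```python
def exponents(k):
    """Return (i, j) such that the k-th (1-based) monomial is x**i * y**j."""
    s = 0
    while (s + 1) * (s + 2) // 2 < k:   # max s with s(s+1)/2 < k, Eq. (4)
        s += 1
    j = k - s * (s + 1) // 2 - 1        # Eq. (5)
    return s - j, j

# First six monomials: 1, x, y, x^2, xy, y^2
print([exponents(k) for k in range(1, 7)])
# -> [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
```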

It is then inferred from Lemma 1 that $F_t$ can be handled as a univariate-indexed polynomial of $N$ terms. Sensor data may practically incur noise. We do not intensively address the issue in this work but assume that the noise is position-independent. At the same time, a Kalman filter also helps alleviate the white-noise impact on measurement data [18]. Furthermore, the information accuracy in case of bad weather (e.g., fog, smog, and rain) can also be improved by raising the transmission redundancy. Dropping extra sensors down from the UAV and increasing the frequency of data reporting, among other alternatives, may be reinforced for this purpose. The noise accordingly does not influence the calculation of the gradient, for which $\varepsilon$ can computationally be considered part of the constant term in subsequent formulation steps. At each point P, the coordinates of the gradient vector are consequently set as

$$\left. \frac{\partial F_t}{\partial x} \right|_P = \sum_{k=1}^{N} \beta_k \, i_k \, x^{i_k - 1} y^{j_k}, \qquad \left. \frac{\partial F_t}{\partial y} \right|_P = \sum_{k=1}^{N} \beta_k \, j_k \, x^{i_k} y^{j_k - 1}. \quad (6)$$

Finally, the gradient vector can be calculated once coefficients $\beta_k$ are determined. Theoretically, the vector may be directly estimated as presented in [30]. However, this method provides only the average slope value over a long range, rather than partial derivatives at a discrete point. Furthermore, the method apparently induces a heavy computation load. The next section details a local regression strategy to calculate the coefficients of $F_t$.

3.2. Measurement Data Regression

Calculation of the partial derivatives of function $F_t$ at point P requires the presence of sensed and/or predicted values of $F$ at positions nearby. How much a measured sample at point $P_s$ influences the calculation depends on the Euclidean distance $d(P, P_s)$: the closer the two points are to each other, the heavier the impact. This is quantified by the weighting factor $w_s$. Lemma 1 helps translate the bivariate representation to a univariate-wise one, expressed as

$$F_t(x, y) = \sum_{k=1}^{N} \beta_k \, x^{i_k} y^{j_k} + \varepsilon, \quad (7)$$

which eases the application of local univariate regression to formulate field $F$. Given a value of $k$, the corresponding monomial of $F_t$ is determined according to Equation (5). Coefficients $\beta_k$ can be estimated based on a weighted least squares method. At first, the loss function $L$ is given by

$$L(\beta) = \sum_{s=1}^{M} w_s \left( F_s - \sum_{k=1}^{N} \beta_k \, x_s^{i_k} y_s^{j_k} \right)^2, \quad (8)$$

where $w_s$ is the weighting function associated with the UAV's current position. It is supposed that ground sensors and dropped-down ones have just reported values $F_s$ of parameter $F$ for at least $N$ positions $(x_s, y_s)$, $s = 1, \ldots, M$. Let us denote a vector $Y = (F_1, F_2, \ldots, F_M)^T$ and a diagonal matrix $W$ whose coefficients are given by

$$W_{s,s} = w_s, \qquad W_{s,r} = 0 \ \text{for} \ s \ne r. \quad (9)$$

We next extend the scalar monomials $x^{i_k} y^{j_k}$ to a matrix $X$ as follows:

$$X = \begin{pmatrix} x_1^{i_1} y_1^{j_1} & \cdots & x_1^{i_N} y_1^{j_N} \\ \vdots & \ddots & \vdots \\ x_M^{i_1} y_M^{j_1} & \cdots & x_M^{i_N} y_M^{j_N} \end{pmatrix}. \quad (10)$$

Subsequently, $L(\beta)$ in Equation (8) is rewritten as

$$L(\beta) = (Y - X\beta)^T W (Y - X\beta). \quad (11)$$

Minimizing $L(\beta)$ expressed in Equation (11) by letting its partial derivative with respect to $\beta$ be 0, as in [31, 32], the solution is determined as

$$\hat{\beta} = (X^T W X)^{-1} X^T W Y. \quad (12)$$

As the number of coefficients in $\beta$ equals $N = (n+1)(n+2)/2$, for the solution in Equation (12) to be deterministic, the UAV must acquire at least $N$ measured samples in its vicinity:

$$M = \rho M_{total} \ge N_{min} = \frac{(n+1)(n+2)}{2}, \quad (13)$$

where $M_{total}$ and $\rho$ are, respectively, the total number of measurement samples available and the fraction of them used as input for the regression.

Regarding the weighting matrix $W$, the values of $w_s$ in the vicinity of the current UAV position $P = (x_P, y_P)$ are set to comply with the Gaussian distribution as

$$w_s = \exp\left( -\frac{(x_s - x_P)^2}{2\sigma_x^2} - \frac{(y_s - y_P)^2}{2\sigma_y^2} \right), \quad (14)$$

where $\sigma_x^2$ and $\sigma_y^2$ are, respectively, the variances of $x$ and $y$.
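
The whole regression step can be sketched in a few lines of numpy, reusing the `exponents` helper from Section 3.1. This is only a minimal illustration of Equations (10), (12), (14), and (6); the degree and bandwidth defaults are assumptions of ours:

```python
import numpy as np

def fit_local(xs, ys, fs, px, py, n=2, sx=100.0, sy=100.0):
    """Weighted least-squares fit of F_t around the UAV position (px, py)."""
    N = (n + 1) * (n + 2) // 2
    terms = [exponents(k) for k in range(1, N + 1)]
    X = np.column_stack([xs**i * ys**j for i, j in terms])            # Eq. (10)
    w = np.exp(-(xs - px)**2 / (2*sx**2) - (ys - py)**2 / (2*sy**2))  # Eq. (14)
    XtW = X.T * w                       # equivalent to X.T @ diag(w)
    beta = np.linalg.solve(XtW @ X, XtW @ fs)                         # Eq. (12)
    return beta, terms

def gradient(beta, terms, px, py):
    """Partial derivatives of the fitted polynomial at (px, py), Eq. (6)."""
    gx = sum(b * i * px**(i - 1) * py**j for b, (i, j) in zip(beta, terms) if i)
    gy = sum(b * j * px**i * py**(j - 1) for b, (i, j) in zip(beta, terms) if j)
    return np.array([gx, gy])
```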

4. Dealing with Object Motion

If the target object does not change its location, the UAV is directed to follow the gradient vector until it finds a peak, as described in Section 2.2. While searching for and tracking a moving object, the UAV may nevertheless observe movement of its measurement cone. In this case, it needs to adjust its flying direction adaptively to the movement. This section presents how the adjustment should be made, followed by a description of tracking behaviors.

4.1. Updating Motion Information of Objects

The cone of a static object is essentially immobile. Updating its information from ground sensors underneath is just a matter of enhancing the known part of the cone gradually. A searching UAV ceases its periodical update when the known part covers the peak position. In reality, the target event may nonetheless be associated with a mobile object. Recall that the object is detected via parameter $F$, which is quickly restored to normal values once the object moves away (e.g., light, radiation, and sound). The cone shape in such a case remains intact, despite its movement on the horizontal plane. Assume that the movement in the $k$th interval is characterized by a vector $\vec{m}_k$, as illustrated in Figure 4. Here, $U_k$ is the position of the UAV at the beginning time of the interval, and $U_{k+1}$ is its expected image at time $t_k + T_u$, where $T_u$ is the period length of each update interval.

Coefficients $\beta_k$ are time-varying and need updating regularly while the object is moving. Each update is performed right before the navigation decision for the interval is made. Because the cone shape remains unchanged, the following equation is always true at any time instant $t$:

$$F_{t + T_u}(x + m_x, y + m_y) = F_t(x, y), \quad (15)$$

where $m_x$ and $m_y$ are, respectively, the projections of vector $\vec{m}_k$ onto axis $Ox$ and axis $Oy$. Equation (15) also helps the UAV detect the cone movement in the previous interval, represented by vector $\vec{m}_{k-1}$. At the beginning time of the $k$th update interval, the UAV is at $U_k$, whose gradient vector is $\vec{g}_k$. If the object stood fixed throughout the period, the UAV would move along $\vec{g}_k$, arriving at a point G. However, because the object is predicted to move according to vector $\vec{m}_k$, the UAV should move along the diagonal of the parallelogram built on $\vec{U_k G}$ and $\vec{m}_k$, reaching $U_{k+1}$ at the end of the period. At that time, the system also checks the value of $F$ at the reached position against Equation (15). Calculation of the deviation angle $\delta_k$ between gradient vector $\vec{g}_k$ and the actual motion vector $\vec{U_k U_{k+1}}$ is needed to direct the UAV thereto. At first, recalling Equation (2), distance $U_k G$ is estimated as if the object did not move:

$$U_k G = |\vec{v}| \, T_u. \quad (16)$$

The actual flying distance $U_k U_{k+1}$ within the interval is subsequently computed as

$$U_k U_{k+1} = \sqrt{(U_k G)^2 + |\vec{m}_k|^2 + 2 \, U_k G \, |\vec{m}_k| \cos\theta_k}, \quad (17)$$

where $\theta_k$ is the angle between $\vec{g}_k$ and $\vec{m}_k$.

Finally, the deviation angle $\delta_k$ can be found at the UAV as

$$\delta_k = \arcsin\left( \frac{|\vec{m}_k| \sin\theta_k}{U_k U_{k+1}} \right). \quad (18)$$

The UAV is then self-navigated according to this deviation angle, i.e., along vector $\vec{U_k U_{k+1}}$, instead of gradient $\vec{g}_k$.
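
The deflection computation of Equations (16)-(18) reduces to a few lines of planar trigonometry. The sketch below assumes the parallelogram geometry reconstructed above; names and guards are our own:

```python
import numpy as np

def deflection(grad, motion, speed, Tu):
    """Deviation angle (rad) and diagonal length for one update interval."""
    d_g = speed * Tu                           # would-be leg U_k G, Eq. (16)
    m = np.linalg.norm(motion)
    if m == 0.0:                               # static object: no deflection
        return 0.0, d_g
    g = grad / np.linalg.norm(grad)
    cos_t = np.dot(g, motion) / m              # cos of angle between g_k, m_k
    d_uu = np.sqrt(d_g**2 + m**2 + 2 * d_g * m * cos_t)           # Eq. (17)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
    delta = np.arcsin(m * sin_t / d_uu)        # deviation angle, Eq. (18)
    return delta, d_uu
```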

4.2. Tracking Motion Objects

Once it finds the moving object, the UAV switches to the tracking mode as depicted in Figure 3. As stated earlier in Section 2, on-board sensing devices, embedded cameras in particular, continuously help the UAV chase the object. To keep the object within its visible range, the tracking UAV must

(1) move so as to keep the orthogonal projection of the UAV onto the ground close to the object;

(2) capture and process images frequently enough, so that

$$d + v_o T_c \le R_v, \quad (19)$$

where $T_c$ denotes the aforementioned capturing and processing interval, $d$ is the distance between the UAV projection onto the ground and the object center, $v_o$ is the object velocity, and $R_v$ is the scanning radius of the on-board sensing devices. Figure 5(a) illustrates this rule.

On the other hand, if the object moves out of the visible range, violating the above rule, the system falls into one of the following cases:

(1) The object moves absolutely far away, triggering signal missing. This brings the system to state discovery as indicated in Figure 3. The UAV then waits for further data to move back to the searching state or, more luckily, directly to any peak found.

(2) The system can still position the object center thanks to sufficient data provided by ground sensors. This means that the object is still marked found.

In the second case, if the UAV moves in the right direction in the next update interval and the scanning radius is large enough, it quickly “sees” the object again by the camera. Figure 6 illustrates these behaviors, further detailing state tracking.

Lemma 2. As soon as a tracked object gets out of the visible range, a sufficient condition for the UAV to get it visible back is that

$$v_o \le \min\left( \frac{v_u}{2}, \ \frac{R_l - R_v}{T_u} \right), \quad (20)$$

where $v_o$ and $v_u$ are the object and UAV velocities, and $R_l$ and $R_v$, respectively, denote the radii of the object-localizable and visible ranges.

Proof. Let us assume that at time instant $t_k$, the orthogonal projection point of the UAV onto the ground and the object center are located at $U_k$ and $P_k$, respectively, as illustrated in Figure 5(b). After an update interval $T_u$, the object moves to the next position $P_{k+1}$ at time $t_k + T_u$.

In the worst case, the object center is located right at the border of circle ($U_k$, $R_v$), and it quickly moves away from the visible range along the radius at velocity $v_o$. This implies a transient invisible time. Now that the on-board camera cannot see the object anymore, image capturing and processing do not help. The system must therefore rely on ground sensor data to assure its object localization. If the data are still available in the range, i.e.,

$$R_v + v_o T_u \le R_l, \quad (21)$$

then the object center is still localizable. In addition, as soon as the next update comes, the UAV gets to know position $P_{k+1}$, which is at a distance of $v_o T_u$ away from the previous location. The UAV accordingly directs itself to $P_{k+1}$, getting the camera-scanned region filled in gray as seen in Figure 5(b). Assuming that within a short update interval the object does not significantly change its velocity, the sufficient condition for the object to be seen by the camera is that

$$(R_v + 2 v_o T_u) - v_u T_u \le R_v. \quad (22)$$

Inequalities (21) and (22) are obviously true if constraint (20) holds. This completes the proof.

Note that constraint (20) in Lemma 2 is not a necessary condition; i.e., the UAV may still fully keep track of the object even if the constraint is violated. For example, in case $v_o > v_u/2$, the UAV can still recover the camera-visible state right in the next update interval if the object subsequently moves slowly or within a limited turning angle.
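
As a quick check, the bound of Lemma 2 (as reconstructed in constraint (20) above) can be evaluated directly. In the usage line, the 30 m camera radius follows Table 3, while the localizable radius, update period, and speeds are illustrative:

```python
def visibility_recoverable(v_obj, v_uav, R_l, R_v, Tu):
    """Sufficient (not necessary) test of constraint (20)."""
    return v_obj <= min(v_uav / 2.0, (R_l - R_v) / Tu)

# Example: R_l = 100 m, R_v = 30 m, Tu = 8 s, UAV at 20 m/s, object at 8 m/s.
print(visibility_recoverable(8.0, 20.0, 100.0, 30.0, 8.0))  # True (8 <= 8.75)
```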

Eventually, Algorithm 1 summarizes the process of chasing the moving object. It details the FSM previously presented in Section 2.2. [Re]calculation of the flying direction is executed based on both the gradient and the deviation angle. The algorithm also indicates how the UAV changes its steering tactic in different operation modes. Apparently, escaping from gradient-based driving reduces the need for data acquisition and processing.

/* initialization */
object_found = false;
peak_found = false;
k = 0;
/* searching */
while (not peak_found) do
 k = k + 1;
 execute local regression to determine β;
 determine gradient ∇F at U_k according to Equation (6);
 if (a peak G is located) then
  peak_found = true;
  break;
 end
 calculate U_kG according to Equation (16);
 locate m_k from Equation (15);
 calculate U_kU_{k+1} according to Equation (17);
 estimate the deviation angle δ_k according to Equation (18);
 navigate the UAV according to angle δ_k;
end
calculate coordinates of peak G;
navigate the UAV in a straight line to G;
capture ground image;
if (object found in image) then
 object_found = true;
 switch to tracking mode;
else
 switch to discovery mode;
end

4.3. On-the-Fly Deployment of Sensors

While searching for and tracking the moving object, the UAV expects to have sufficient collected data from ground sensors as soon as possible, so that the very object position is pinpointed accurately. Note however that sensors may be sparsely distributed or partially corrupted due to the event occurrence, raising the need for the UAV itself to deploy on-the-fly ones. Sensors should be considered for dropping down near the following positions:

(i) The would-be arrival point G predicted while the UAV is at $U_k$

(ii) The expected arrival point $U_{k+1}$

(iii) The predicted object center position

(iv) Positions in the vicinity of the UAV, to meet inequality (13)

(v) Surrounding positions, to augment satisfaction of the conditions claimed in Lemma 2

At the beginning of each update period, if the system finds any of the above points within the drop-down radius of the UAV, its carry-on sensors will get ready. Should more UAVs join the mission, they should also be dispatched to drop sensors cooperatively. A dropping schedule may nonetheless be discarded if existing sensors nearby already transmit sufficient measurement samples.

5. Overhead Analysis

Now that the searching and tracking strategies have been clarified, let us evaluate their complexity and data communication load. It can be inferred from the analytic model and algorithms presented in Sections 3 and 4 that the overhead is influenced by the UAV steering policy, operation mode, object mobility, data updating frequency, and regression parameters.

5.1. Complexity

The heaviest load pertains to the matrix chain multiplication in Equation (12) to find the coefficients of vector $\beta$. This is in principle executed in each update interval, as stated in Section 3.2. Recall however that it is carried out only in the early searching stage. As soon as the UAV successfully locates the object center, it is directed to the location without any extra calculation. In state tracking of the FSM depicted in Figure 3, the UAV purely processes discrete ground images.

As indicated in Table 2, the complexity is $O(N^2 M + N^3)$, dominated by forming and inverting $X^T W X$. Given that $M \ge N = (n+1)(n+2)/2$ as explained in Section 3.2, the coefficients can be estimated in polynomial time of the degree $n$ of $F_t$ and the number of sensor measurement samples (i.e., $M$) used for the regression. Our simulations show that the object center is found within just a few tens of update intervals, triggering signal peak found as indicated in Figure 3. The UAV subsequently stops the regression computation. Note also that the cumulative calculation load depends on the frequency of data updates as well.

If an on-board camera is employed to confirm the existence of the object, training tasks must be executed, followed by periodical object detection. UAVs nowadays are computationally powerful enough to get these jobs done [33]. They can anyway offload heavy training tasks to a cloudlet if wishing to reduce the edge processing load [34]. Upon acquiring the training results, they may easily and quickly carry out detection jobs.

5.2. Data Communication Load

Periodical local regression to update the gradient vector requires the availability of measurement data from ground sensors. As mentioned earlier in Section 3, at least $N_{min}$ sensor samples are needed for each local regression instance of $F_t$. The data volume to be collected from ground sensors per regression is thus estimated as

$$V_r = N_{min} L_m, \quad (23)$$

where $L_m$ denotes the length of a message carrying a measurement sample. Referring back to Equation (13), we may now calculate the total data throughput flowing from ground sensors to the UAV:

$$D = \frac{V_r}{T_u} = \frac{(n+1)(n+2) \, L_m}{2 \, T_u}. \quad (24)$$
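
For a feel of the magnitude, here is a quick instance of Equations (23) and (24), assuming an illustrative polynomial degree $n = 3$ and the 256-byte message length used later with Figure 11:

```python
n, L_m, T_u = 3, 256, 2.0               # degree (assumed), bytes, seconds
N_min = (n + 1) * (n + 2) // 2          # 10 samples per regression instance
V_r = N_min * L_m                       # 2560 bytes per regression, Eq. (23)
D = V_r / T_u                           # 1280 B/s toward the UAV, Eq. (24)
print(N_min, V_r, D)
```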

Note that the above data amount is required only before the object position is located. As soon as signal found in the FSM depicted in Figure 3 is generated, the UAV, without regression updates, flies straight to the detected center position.

6. Performance Evaluation Results

We verified the proposed strategy by constructing NS-3 and Matlab cosimulations. Data communication and formulation were realized in the well-known network simulation tool NS-3. Meanwhile, the Simulink UAV Library for Robotics System Toolbox™ was employed to adjust and validate flying trajectories upon each self-navigation operation. Specifically, what the exact heading trajectory looks like in each update interval is shaped by the Simulink tool. With the application of this module, the navigating calculation already takes into account external factors such as wind and atmospheric disturbances [27, 28]. Networking configurations and system setup are concretely described in Table 3. In all the simulation scenarios, sensor-measured data also underwent a Kalman filter-based preprocessing stage [18] to be cleaned.

Simulation results show that expected trajectories basically agree with those adjusted by Simulink, as an example seen in Figure 7. The UAV reached its expected arrival position in all update intervals.

While our proposed framework (hereafter called online self-nav) is novel, we attempted to simulate the best matching schemes we know of for a comparative study. At first, the most promising state-of-the-art approach we know, the self-adaptive team [8], was shaped to fit the context. As mentioned in Section 1, this scheme relies solely on on-board sensors to detect a gas emission event. For fair comparisons, the team of five UAVs thereon was, however, assumed to also communicate with ground sensors while flying. In addition, we also resimulated another relevant approach to organizing a dynamic team of UAVs for searching and tracking the event, to which further comparison is made. Namely, the PSO- (progressively spiral-out optimization-) based algorithm devised in [25] was numerically evaluated in the same system context. In this searching method, a time-varying team of UAVs was path-planned according to predefined rules for exhaustive scanning. They basically flew along spiral-out trajectories, whose bounded area is expected to cover the target moving region.

6.1. Success of Online Self-Navigation

We let the UAV depart at the origin (0 m, 0 m), whereas the object initially arose at position (500 m, 1200 m). The object subsequently moved at variable speeds of up to 10 m/s.

A typical simulation instance of searching is depicted in Figure 8. It shows how the UAV successfully approached the object; afterwards, it switched to the tracking state (as previously mentioned in Section 2.2). Remarkably, the distance between the projection of the UAV onto the ground and the object position got gradually shorter during the search. At an update period of 8 s and an object velocity of 8 m/s, the UAV eventually reached the target after 25 update periods (around 200 s). This success consistently occurred in all the simulation instances we ran.

6.2. Searching Efficiency

Searching time and flying distance are key parameters to evaluate how well the UAV finds the object. To make a fair comparison between our online self-navigation strategy and the self-adaptive team of five UAVs, we let our UAV be stationed at position (0 m, 900 m). Before taking off, the five UAVs of the team were, respectively, arranged at (0 m, 300 m), (0 m, 600 m), (0 m, 900 m), (0 m, 1200 m), and (0 m, 1500 m). The object initially arose at position (900 m, 900 m). Meanwhile, in the case of the aforementioned PSO scanning approach [25] (hereafter referred to as “PSO” for short), the team was initially formed by four UAVs. At the beginning of the search, they were located evenly on a circle centered at position (500 m, 900 m) with a radius of 20 m. During the search, the team called on another UAV to keep the search exhaustive. All the UAVs were also equipped with 30 m scanning cameras.

It can be inferred from Figure 9 that it is basically more costly, in terms of both time and flying distance, to find the object when the gradient update interval (or update period) gets longer. This implies a trade-off between computation/communication and flying costs. However, the impact of the interval on the time and distance is not as significant as that of object mobility.

The figure shows that both temporal and spatial lengths extend exponentially as the object velocity gets higher. Throughout all 25 scenarios formed by combinations of five update periods (2 s, 4 s, 6 s, 8 s, and 10 s) and five object velocities (2 m/s, 4 m/s, 6 m/s, 8 m/s, and 10 m/s), we observe a consistent outperformance of our online self-navigation strategy over the others. As depicted in Figure 9, it took about 230 s (23 update periods) to reach the target object in the worst case (update period of 10 s and object speed of 10 m/s). On the other hand, the self-adaptive team spent about 260 s on locating the object, which is about 13% longer. In this case, the distance difference is 0.54 km (3.82 km versus 3.28 km), meaning 16% longer. Note however that the cumulative flying distance of the whole UAV team is up to 19.12 km, gravely longer than 3.28 km in ours. At the same period and speed, PSO observed a total flying distance of 4.41 km per UAV, but the team eventually failed to find the object. The reason is that the object velocity exceeded the limit set by the spiral-out radius and UAV mobility [25]. If the velocity was lowered to 8 m/s, PSO caught the object after about 230 s, which is 60 s (or 35%) longer than ours. In this scenario, the average per-UAV flying distances of the online self-nav, self-adaptive, and PSO strategies are, respectively, 2.38 km, 2.78 km, and 3.36 km.

6.3. Tracking Accuracy

To evaluate how well the UAV keeps track of the moving object, we regularly collected location data throughout the whole tracking process. Let us define the tracking deviation at a time instant as the distance between the projection of the UAV onto the ground and the object center. Quantitatively, the average ($\bar{d}$) and maximum ($d_{max}$) deviations are estimated as follows:

$$\bar{d} = \frac{1}{K} \sum_{k=1}^{K} \| U_k - P_k \|, \qquad d_{max} = \max_{1 \le k \le K} \| U_k - P_k \|, \quad (25)$$

where $U_k$ and $P_k$, respectively, represent the locations of the UAV and the object at update interval $k$, and $K$ is the number of positions used to calculate the tracking deviation.
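
Given logged ground-projected UAV positions and object positions as K x 2 arrays, the two metrics of Equation (25) are computed directly, as in this short sketch:

```python
import numpy as np

def deviations(uav_pos, obj_pos):
    """Average and maximum tracking deviation over K update intervals."""
    d = np.linalg.norm(uav_pos - obj_pos, axis=1)
    return d.mean(), d.max()
```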

As indicated in Figure 10, the deviation values essentially depend on object mobility more significantly than on the update interval. In general, it is more challenging for the UAVs to track as the speed and the interval get greater. Specifically, in our proposed self-navigation strategy, the maximum and average deviations are only 0.01 m at an interval of 2 s and a velocity of 2 m/s. They reach 4.91 m and 4.33 m at 10 s and 10 m/s, respectively. These deviation levels are anyway well within the scanning area of the on-board camera, whose radius is 30 m as indicated in Table 3.

The outperformance of our online navigation scheme, as seen in the chart, is clearly conclusive with respect to both deviations. All the simulation instances observed better tracking accuracy over the 25 pairs of update period and object velocity values, compared to the self-adaptive and PSO strategies. The maximum difference, pertaining to the pair of 10 s and 10 m/s, is 17.55 m and 7.39 m for $d_{max}$ and $\bar{d}$, respectively. In the least challenging case (the pair of 2 s and 2 m/s), the self-adaptive team strategy incurs deviations of 0.98 m and 0.48 m, far greater than ours (only 0.01 m as stated above). Meanwhile, the PSO team, respectively, suffers deviations of 1.03 m and 1.26 m, which are graver still.

Overall, the online self-navigation framework brings clear reductions with respect to searching time, flying distance, and tracking error, as listed in Table 4. As shown in the table, the worst performer was the PSO team.

6.4. Data Exchange Volume

Recall that data exchange between ground sensors and the UAV mainly occurs in the searching phase, when the object center position has not yet been located. As stated in Section 5.2, during this phase, the exchange volume depends on the update interval and the number of measurement samples transmitted. Basically, the amount should monotonically decrease as the update interval is lengthened. With respect to object mobility, the load gets heavier as the velocity increases. This can be explained by the fact that more ground sensors get involved in providing measurement data to cover a larger motion area.

Figure 11 truly reflects this argument. With a message length of 256 bytes as indicated in Table 3, the data amount reaches 9.17 MB, 5.99 MB, 4.42 MB, 3.54 MB, and 2.97 MB, corresponding to object velocities of 10 m/s, 8 m/s, 6 m/s, 4 m/s, and 2 m/s, all at the minimum update interval of 2 s. When the update takes place less frequently, the amount is clearly reduced. The marginal reduction nonetheless diminishes as the update period increases. Note that the plotted statistics also cover the tracking phase, during which data exchange occurs sporadically (as explained in Sections 2.2 and 5.2).

6.5. Energy Consumption

In reality, a UAV spends its energy on flying motors, regression computation, and data communication with ground sensors. The first part accounts for the major share of consumption and depends on motion parameters and flying time. The consumed power is roughly approximated as a hover baseline plus a speed-proportional term:

$$P_{fly} = P_h + \alpha v, \quad (26)$$

where $P_h$ is the power consumed when the UAV hovers and $\alpha$ is a proportional factor. In this study, we do not deeply analyze consumption on takeoff and landing, since these operations are assumed to take place distantly from the searching and tracking region. In the simulations, $\alpha$ and $P_h$ were assigned fixed values. Figure 12 plots consumption statistics for gradient update periods of 2 s, 4 s, 6 s, 8 s, and 10 s. Different velocities of the object were also checked to make the results more comprehensive.

The chart shows that the total energy consumed gets drastically higher as the object moves faster, whereas the gradient update interval does not adhere to the same rule. The latter may be attributed to random variation of the motion trajectories. In all the simulations, the values range from 24.13 kJ, at an update period of 2 s and an object velocity of 2 m/s, to 68.45 kJ at 10 s and 10 m/s.

7. Conclusions

We have presented an online self-navigation framework in which UAVs regularly update measurement data on the move. Periodical formulation of the acquired data brings helpful hints to steer UAVs toward moving objects. The local regression-based formulation enables calculating the gradient vector at any instant position. The vector is a good reference for steering the vehicles, especially in the early searching time, when available information about targets is limited. Once ground data are sufficient, the formulation also helps instantly locate the exact position of objects. Flexibly switching among searching, tracking, and discovering states, the comprehensive maneuvering strategy considerably reduces both computation and communication loads.

Numerical results of NS-3 and Matlab cosimulations consistently agree with the theoretical expectations. Our proposed framework clearly shows its outperformance in searching and tracking. Compared to the best state-of-the-art method we know, reductions of over 15%, 18%, and 80% are, respectively, observed with respect to searching time, flying distance, and tracking deviation. Simulation statistics also indicate how the supervising performance depends on object mobility and the frequency of data updates.

Data Availability

Simulation experiments were carried out using NS-3 and Matlab. All data, except those of sensor node configurations and motion trajectories, used to support the findings of this study are included within the article. The latter are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this article.

Acknowledgments

Our research was partially sponsored by the Ministry of Science and Technology of Vietnam under research project “Research and development of Internet of Things platform (IoT), application to management of high technology, industrial zones,” mission code KC.01.17/16-20.