Abstract

Humans have a fundamental ability that cooperative UAVs lack: sharing vision with each other to fulfill common goals. The difficulties mainly lie in finding a homologous mathematical description of human vision sharing and in elusive experimental practice. This paper proposes parallel multiview splicing on clouds (PMVSC). We first review both theoretical and practical studies of UAVs and then reconsider them from the perspective of humans' vision sharing. Next, a conceptual model of parallel multiview splicing on clouds is proposed and its mathematical deduction is carried out. Furthermore, an experimental cooperative UAVs platform is built to implement the algorithms in practice. Both the simulated and experimental results validate the feasibility of our method. Finally, a general discussion and proposals for addressing future issues are given.

1. Introduction

The unmanned aerial vehicle (UAV) combat system plays an important role in acquiring information superiority, implementing precise strikes, and completing fast combat tasks in modern rapid, information-based warfare [1]. In particular, the intelligent UAV, which integrates artificial intelligence to perceive the environment, make attack strategies, assess tasks, etc., can secure the initiative and victory in war [2].

However, a single UAV often accomplishes tasks unsatisfactorily. When a single UAV invades an enemy-occupied area, it often fails to complete an effective attack due to its own payload limitations, enemy interference, and interception. Therefore, cooperation among multiple UAVs is needed to guarantee task completion [3, 4].

With the development of technology and equipment, aerial confrontation between major powers will be of high intensity. Traditional methods consider manned vehicles the main body of future aerial combat, but the significance of cooperative UAVs (Co-UAVs) is now being recognized. Co-UAVs offer a new type of combat effectiveness [5] with the following advantages.

(a) Intelligence advantage: Co-UAVs carry distributed sensors, which can cooperate with each other to achieve precise target positioning. Networked operations can share information among UAVs, achieving "any one knows, everyone knows" in the swarm. This intelligence sharing lays the foundation for cooperative attack.

(b) Speed advantage: Co-UAVs can automatically decompose tasks online according to the battlefield situation and give the subtasks to the corresponding vehicles. The assigned UAVs can react quickly and coordinate operations such as interference suppression, fire strike, and damage assessment, which shortens the "perception-decision-action" cycle and speeds up the combat process.

(c) Cooperation advantage: UAVs can cooperate autonomously and adaptively, which makes the swarm act as a single entity. As a result, uniform intensive attack and dense defense can be achieved.

(d) Quantity advantage: Co-UAVs usually use low-cost unmanned platforms, small in size and large in quantity. They can maintain high pressure and continuous attack on the enemy, rapidly paralyzing the opposing defense system and achieving the operational purpose in the shortest time.

As a disruptive modern attack strategy against the enemy, cooperative UAVs have come to be regarded as the core of victory. In particular, the swarm intelligence (SI) of Co-UAVs is widely regarded as the key technology for winning future combat [6].

In theory, Suresh and Ghose [7] proposed a self-adapting ground attack strategy for UAVs by establishing a path function within the detection range; they combine reconnaissance, interference, and autonomous attack to build an adaptive ground attack strategy for Co-UAVs. Luo et al. [8] propose an online-offline integrated cooperation strategy for UAVs: offline expert decision-making analyzes the battlefield environment to establish an environmental impact map, while an online robust decision-making model evaluates the scenario faced by each UAV so as to adopt the best robust attack action. Wang et al. [9] try to find the best strategy for Co-UAVs by using a Radial Basis Function Neural Network (RBF-NN) and to evaluate the performance of the cooperation; an alterable neural network is also introduced to search the precise candidate feasible solution set, improving the efficiency of the RBF-NN. In [10], an interval consistency model based on an auction algorithm is proposed to solve the consistency problem of Co-UAVs, making the UAVs reach the target at the same time.

In practice, as the Research Laboratory of the United States Air Force (USAF) showed in 2002, the key to success in future complex battlefields is the use of multi-UAVs, covering search and attack, investigation and suppression, psychological warfare, and tactical restraint [11]. Co-UAVs are the breakthrough point of future unmanned warfare. In subsequent USAF research, hundreds of simulation experiments were conducted on the interception of Co-UAV attacks by the Aegis air defense system [12]. The results show that the defense system could not intercept all UAVs and was repeatedly broken through, which indicates the superior attack performance of Co-UAVs. In 2015, the Defense Advanced Research Projects Agency (DARPA) published the "Gremlins" project, which plans to develop partially recoverable Co-UAVs for reconnaissance and electronic warfare [13]. The Gremlins can defeat the enemy by suppressing missile defense systems, cutting off communication, and attacking the enemy's data network with a large number of UAVs. In 2016, China Electronics Technology Corporation (CETC) established the first Co-UAVs test prototype in China and verified the cooperative principle with 67 UAVs. In 2017, a flight test of 119 fixed-wing UAVs was completed by CETC [14].

Both the theoretical and practical research indicates that Co-UAVs have become a winning force on the battlefield, with the ability to change the rules of the game in the future [15]. However, prior research mainly focuses on preplanned strategies, meaning the ground attack strategy is established before the UAVs arrive on the battlefield. It is very hard to preplan all scenarios exhaustively, for the battlefield is unknown (or partially unknown) in advance.

Here, a human-cooperation-inspired approach to Co-UAVs is presented. We first explain the goal-oriented cooperation of humans, especially strategy making based on vision sharing. Then, a human-like model called parallel multiview splicing on clouds (PMVSC) is proposed, which incorporates these biobehavioral-science insights into a structured cooperative system of UAVs. In addition to developing PMVSC, we applied the model to a variety of ground attack tasks for multitargets that required mutual cooperation of UAVs. Finally, PMVSC is tested in a real scenario (in which there are two distinct kinds of objects, to test the precision of the Co-UAVs' processing of multitargets) on the experimental multi-UAVs platform.

2. Goal-Oriented Cooperation of Humans Based on Vision Sharing

The cooperation of humans (CoH) has been illustrated by the social psychologist Lewin et al. [16, 17]. He pointed out that humans' cooperation is a complex group behavior $B$ which is affected by the internal individuals $P$ and the external environment $E$:

$$B = f(P, E), \qquad B = \{b_1, b_2, \ldots, b_n\}, \tag{1}$$

where $B$ represents the behavior set of the individuals and $n$ is the total number of individuals in the group. $P$ denotes the internal conditions and characteristics of the individuals, which consist of various physiological and psychological factors, such as physiological needs, physiological characteristics, ability, and personality. $E$ is the external environment around every individual.

The Lewin CoH model reveals the general principles of human behavior to some extent. However, it is a passive cooperation model with no clear goals. Goal-oriented behavior is the process of seeking to achieve the general goals of the group. In a cooperative mission, every individual has his own task; they work independently as well as in parallel to fulfill the general goal. So, equation (1) can be revised as follows:

$$B = f(P, E, G), \qquad G = \{g_1, g_2, \ldots, g_n\}, \tag{2}$$

where $G$ represents the group goals, composed of each individual goal $g_i$.

Take the typical scenario shown in Figure 1 as an example. The general goal is to find all the objects (the red circles in Figure 1) in the environment, but obstacles block the sight. Each individual can only see local objects and the local environment (the translucent vision). The individuals share their visions to obtain the overall environment and consult together to reach a proper assignment of objects.

3. Goal-Oriented Cooperation Mechanism Based on Vision Sharing

3.1. Outline of Parallel Multiview Splicing on Clouds

A graphical representation of the proposed architecture is given in Figure 2. Perceiving, cognizing, and assigning targets in the UAVs' cooperation system mirrors the goal-oriented cooperation of humans based on vision sharing: each individual is responsible for a specific target, and together they fulfill the overall goal.

In PMVSC, the targets (both true and false) are first perceived by the UAVs, and each UAV only knows the targets in its own field of vision (FoV). Several UAVs fly over the target environment, detecting targets with their onboard cameras. Though a UAV can get local information through the perceive module, it cannot remove the targets detected repeatedly across the group. Each UAV therefore uploads the information perceived in its FoV to the clouds through the vision sharing module. The vision sharing module preprocesses the detected environment information of the respective UAVs, and the separated FoVs are then combined into a full and detailed environment in a single map. Next, the entire map is transferred to the cognize module to distinguish whether the targets are true or false: the valuable, true targets need to be attacked, while the disguised, false targets do not. Finally, the information on the true targets is delivered to the next module, which is responsible for task assignment and path planning.

To achieve such complicated processes in the PMVSC architecture, a number of components need to be explicated; they are described in the following sections together with the derivation of the mathematical algorithms.

3.2. The Components and Algorithms of PMVSC

Suppose there are $K$ UAVs performing the attack task. For the $k$th UAV in the group, the image perceived by the camera is $f_k(x, y)$, where $(x, y)$ is the pixel position along the x- and y-axes of the perceived image. In the perceive module, the color image is preprocessed to make it more convenient for subsequent processing.

The original image from the camera is in the red, green, and blue model (RGB-model); each color appears in its primary spectral components of red, green, and blue. The model is based on the Cartesian coordinate system. The RGB-model has advantages in observation and application. However, as pointed out by Ali et al. [18], the RGB-model has two shortcomings compared to the hue, saturation, and illumination model (HSI-model): (a) the three components describe the image jointly, so much redundant information is shared among the components, which increases the calculation; (b) the change of Euclidean distance between two points in RGB space is not proportionate to the change of actual color, so color separation easily produces false separations, omits useful information, or mixes useless information with useful information.

Figure 3 shows the HSI cylindrical color space model, where $H$, $S$, and $I$ represent the values of hue, saturation, and illumination, respectively:

$$
H =
\begin{cases}
\theta, & B \le G,\\
360^{\circ} - \theta, & B > G,
\end{cases}
\qquad
\theta = \cos^{-1}\left\{\frac{\frac{1}{2}\left[(R-G) + (R-B)\right]}{\left[(R-G)^{2} + (R-B)(G-B)\right]^{1/2}}\right\},
$$

$$
S = 1 - \frac{3\min(R, G, B)}{R + G + B},
\qquad
I = \frac{R + G + B}{3},
\tag{3}
$$

where $R$, $G$, and $B$ represent the normalized values of the red, green, and blue colors in the image.

The perceive module converts RGB to HSI. In the HSI-model, the image features are evident in its space. After converting from RGB space to HSI space, the information structure is more compact, the components are more independent of each other, and the loss of color information is smaller, which lays a good foundation for segmentation and target recognition.
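As a concrete illustration, the following sketch implements the standard conversion of equation (3) in NumPy; the function name and the assumption that the input channels are normalized to [0, 1] are ours, not part of the original system.

```python
import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB image (H x W x 3, channels normalized to [0, 1])
    to the HSI color space via the standard geometric conversion."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-8  # guards against division by zero on gray pixels

    # Hue: angle in the chromatic plane, mirrored when B > G
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)

    # Saturation: 1 minus the normalized minimum component
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)

    # Illumination: mean of the three channels
    i = (r + g + b) / 3.0
    return np.stack([h, s, i], axis=-1)
```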

After the RGB-to-HSI conversion, the transformed image $f_k(x, y)$ is uploaded to the vision sharing module, which is responsible for information normalization and image invariance, as shown in Figure 4.

In image processing, the moment invariant features reflect the shape information of an image and provide translation and scale invariance [19]. For an obtained image $f(x, y)$, define its $(p+q)$-order origin moment as follows:

$$m_{pq} = \sum_{x=1}^{M}\sum_{y=1}^{N} x^{p} y^{q} f(x, y), \tag{4}$$

where $M$ and $N$ represent the maximum row and column scale of the image, $(x, y)$ is the pixel position along the x- and y-axes in $f(x, y)$, and $p, q = 0, 1, 2, \ldots$

However, the origin moment responds to changes in the position of $f(x, y)$. To move toward invariance of translation and scale, $m_{pq}$ is improved to the $(p+q)$-order central moment:

$$\mu_{pq} = \sum_{x=1}^{M}\sum_{y=1}^{N} (x - \bar{x})^{p} (y - \bar{y})^{q} f(x, y), \tag{5}$$

where $\bar{x}$ and $\bar{y}$ represent the centroid position of the image, calculated by the following equation:

$$\bar{x} = \frac{m_{10}}{m_{00}}, \qquad \bar{y} = \frac{m_{01}}{m_{00}}. \tag{6}$$

Because $\mu_{pq}$ can only keep translation invariance, the normalized central moment is defined to obtain scale invariance as well:

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\rho}}, \qquad \rho = \frac{p+q}{2} + 1. \tag{7}$$
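A minimal NumPy sketch of equations (4)-(7); the function name is ours, and the input is assumed to be any nonnegative grayscale array.

```python
import numpy as np

def normalized_central_moment(f, p, q):
    """Normalized central moment eta_pq of a grayscale image f,
    combining equations (4)-(7): translation and scale invariant."""
    x, y = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    m00 = f.sum()                                   # zeroth origin moment, eq. (4)
    x_bar = (x * f).sum() / m00                     # centroid, eq. (6)
    y_bar = (y * f).sum() / m00
    mu_pq = ((x - x_bar) ** p * (y - y_bar) ** q * f).sum()  # central moment, eq. (5)
    rho = (p + q) / 2.0 + 1.0
    return mu_pq / m00 ** rho                       # normalization, eq. (7)
```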

In the cognize module, as shown in Figure 5, the main functions are rotation invariance, image mosaic, and target classification.

From [20], we can infer that rotation invariance can be obtained by the set of equations (8), the classical invariant moments built from the normalized central moments:

$$
\begin{aligned}
\phi_1 &= \eta_{20} + \eta_{02},\\
\phi_2 &= (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2,\\
\phi_3 &= (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2,\\
\phi_4 &= (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2,\\
\phi_5 &= (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right]\\
&\quad + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right],\\
\phi_6 &= (\eta_{20} - \eta_{02})\left[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}),\\
\phi_7 &= (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\left[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\right]\\
&\quad - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})\left[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\right].
\end{aligned}
\tag{8}
$$
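OpenCV ships this invariant set directly; the quick check below (the image file name is a placeholder) verifies that the seven values of equation (8) survive a 90° rotation.

```python
import cv2
import numpy as np

# Rotation-invariance check using OpenCV's built-in Hu moments,
# which correspond to the phi_1..phi_7 of equation (8).
img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)   # placeholder image file
rotated = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)

hu_img = cv2.HuMoments(cv2.moments(img)).flatten()
hu_rot = cv2.HuMoments(cv2.moments(rotated)).flatten()

# Log-magnitudes are customary because the raw invariants span many orders.
print(np.abs(np.log(np.abs(hu_img)) - np.log(np.abs(hu_rot))))  # all entries ~0
```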

For the images $f_1(x, y)$ and $f_2(x, y)$, as shown in Figure 6, the crucial procedure of image mosaic is to find the most similar region in both $f_1$ and $f_2$ and to montage the two images based on that common region. Supposing the test region is a square with side length $l$, the similarity between the two images at a candidate alignment $(u, v)$ is defined as $D(u, v)$. Then, the image mosaic can be fulfilled by calculating the minimum value:

$$(u^{*}, v^{*}) = \arg\min_{(u, v)} D(u, v). \tag{9}$$
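A brute-force sketch of this search, assuming the sum of squared differences (SSD) over the $l \times l$ test square as the measure $D$; the paper does not pin the measure down, so SSD is our assumption.

```python
import numpy as np

def mosaic_offset(f1, f2, l):
    """Search for the offset (u, v) of f1 whose l x l region best matches
    the test square of f2, minimizing an SSD instantiation of D(u, v)."""
    template = f2[:l, :l].astype(float)       # test square taken from f2
    best_d, best_uv = np.inf, (0, 0)
    for u in range(f1.shape[0] - l + 1):
        for v in range(f1.shape[1] - l + 1):
            region = f1[u:u + l, v:v + l].astype(float)
            d = np.sum((region - template) ** 2)
            if d < best_d:
                best_d, best_uv = d, (u, v)
    return best_uv, best_d                    # alignment used to montage f1 and f2
```

In practice, this quadratic scan would typically be replaced by cv2.matchTemplate or an FFT-based correlation for speed.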

Once the image mosaic is ready, all the detected targets are combined in a whole image $F(x, y)$. The targets must then be classified to find the true targets to attack. For accurate recognition of multitargets, feature extraction and feature classification are the key issues: true and false targets are very similar, and even the distortion of real targets during recognition can lead to recognition errors. A cognition-based intelligent recognition method is used in this paper to classify target features under similarity constraints and achieve high recognition accuracy.

Assume there are $n_t$ targets in $F(x, y)$. For the $i$th and $j$th targets $T_i$ and $T_j$, a matrix feature space $Y$ can be introduced whose entries express the similarity between $T_i$ and $T_j$:

$$Y = \left[y_{ij}\right]_{n_t \times n_t}, \qquad y_{ij} = \operatorname{sim}\left(T_i, T_j\right), \tag{10}$$

where $\operatorname{sim}(\cdot, \cdot)$ measures the similarity between the extracted feature vectors of the two targets.

Then, the problem of feature classification and recognition of true and false targets is transformed into a problem of similarity constraints on the feature matrix $Y$. Classifying targets with similar features amounts to minimizing the spread among features of the same kind of target, which can be expressed in a standard low-rank form, as equation (11) shows:

$$\min_{W} \left\|Y - W W^{\top} Y\right\|_{F}^{2}, \qquad \text{s.t. } W^{\top} W = I_{k}. \tag{11}$$

Since equation (11) is an optimization problem over matrices, it is transformed via a singular value decomposition in order to obtain the optimal solution. Let the singular value decomposition of matrix $Y$ be

$$Y = P \Sigma Q^{\top}, \tag{12}$$

where $P$ is the transformation matrix (of left singular vectors) and $\Sigma$ is the singular value matrix of $Y$.

Assuming that $\Sigma_k$ is the diagonal matrix composed of the first $k$ singular values of matrix $Y$ and $P_k$ is the matrix of left singular vectors corresponding to $\Sigma_k$, there is a definite solution of equation (11):

$$W^{*} = P_k. \tag{13}$$

For any orthogonal matrix $R$, it can be verified that $P_k R$ is still a solution of the problem. Therefore, the original objective function can be rewritten as follows:

$$\min_{R^{\top} R = I_k} \left\|Y - P_k R \left(P_k R\right)^{\top} Y\right\|_{F}^{2} = \left\|Y - P_k P_k^{\top} Y\right\|_{F}^{2}. \tag{14}$$
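A compact NumPy sketch of equations (12)-(13); the function name is ours.

```python
import numpy as np

def feature_subspace(Y, k):
    """Truncated SVD of the target-similarity matrix Y, equations (12)-(13):
    returns the top-k left singular vectors P_k and the projected features.
    Any rotation P_k @ R with orthogonal R spans the same subspace (eq. (14))."""
    P, sigma, Qt = np.linalg.svd(Y, full_matrices=False)  # Y = P @ diag(sigma) @ Qt
    Pk = P[:, :k]
    return Pk, Pk.T @ Y
```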

If $Y$ is used as the input layer and $P_k$ as the output layer of a network, the problem can be solved with a deep belief network model, since it resembles the energy function of deep belief networks [21]. Taking the visible variable $v$ of each network input layer and the hidden variable $h$, the energy function can be defined using the Gaussian restricted Boltzmann machine model as equation (15) in order to classify the feature data reasonably:

$$E(v, h; \theta) = \sum_{i=1}^{n_v} \frac{(v_i - a_i)^2}{2\sigma_i^2} - \sum_{j=1}^{n_h} b_j h_j - \sum_{i=1}^{n_v}\sum_{j=1}^{n_h} \frac{v_i}{\sigma_i} w_{ij} h_j, \tag{15}$$

where $\theta = \{w_{ij}, a_i, b_j, \sigma_i\}$ are the model parameters and $n_v$ and $n_h$ represent the number of visible and hidden units in the network, respectively.
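The energy of equation (15) transcribes directly into NumPy; the names below are of our choosing.

```python
import numpy as np

def grbm_energy(v, h, W, a, b, sigma):
    """Energy of a Gaussian restricted Boltzmann machine, equation (15):
    quadratic visible term, linear hidden bias, and visible-hidden coupling."""
    visible = np.sum((v - a) ** 2 / (2.0 * sigma ** 2))
    hidden = -np.dot(b, h)
    coupling = -np.dot(v / sigma, W @ h)   # W has shape (n_v, n_h)
    return visible + hidden + coupling
```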

By defining the value range of the model's similarity constraint parameter, the granularity of the eigenvalue classification can be changed. That is, the model attains the cognitive recognition characteristics for similar targets and can finally distinguish true from false targets in $F(x, y)$, as shown in Figure 7. For a target $T_i$, the projected feature point is $y_i$. Supposing the feature points of the true and false standard samples are $y_T$ and $y_F$, respectively, then, in the feature space of equation (12), if $\|y_i - y_F\| < \|y_i - y_T\|$, $T_i$ is classified as a false target; otherwise it is a true target.
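A one-line sketch of that nearest-sample rule (names ours):

```python
import numpy as np

def classify_target(y_i, y_true, y_false):
    """Label a projected target feature by its nearest standard sample."""
    is_false = np.linalg.norm(y_i - y_false) < np.linalg.norm(y_i - y_true)
    return "F" if is_false else "T"
```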

In the last process, the assignment of the true targets among the Co-UAVs is studied. Many intelligent methods have been used for multiagent cooperative problems [22–26], especially because the unpredictable outcome of each UAV's behavior in target assignment affects the strategies of all subsequent UAVs. However, these methods are too subjective and are highly coupled with the real-time task allocation process. More objective and dynamic methods are needed for target assignment. In this paper, the Bayesian network is introduced into the modeling of the UAV target assignment task to handle dynamic adjustment and real-time strategy in target assignment.

A Bayesian network is a directed acyclic graph with probability annotations, which supports learning and statistical inference for prediction, causal analysis, etc. For the multiple-UAV target assignment task in this paper, the Bayesian network can be expressed as follows:

$$BN = \langle G(U, A), \Theta \rangle, \tag{16}$$

where $G$ is a directed acyclic graph, $u_l \in U$ is a member of the Co-UAVs participating in the mission, $A$ is the set of arcs of graph $G$, and $\Theta$ is the probability annotation of graph $G$, as shown in Figure 8.

For any UAV member $u_l$, each element in $\Theta$ represents the conditional probability density of a target node. The probability density rule over the target task nodes $x_1, x_2, \ldots, x_n$ is as follows:

$$P(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} P(x_i \mid x_1, \ldots, x_{i-1}), \tag{17}$$

where the direct calculation of this joint probability requires exponentially many conditional probability values, so the amount of calculation is very large.

Therefore, introducing the variable independence hypothesis of Bayesian networks can greatly reduce the number of prior probabilities needed to define the network. For the probability density rule constructed in this paper, we can find, for any target task node $x_i$ in the network structure, a minimum subset $\pi(x_i) \subseteq \{x_1, \ldots, x_{i-1}\}$ on which $x_i$ remains conditionally dependent:

$$P(x_i \mid x_1, \ldots, x_{i-1}) = P(x_i \mid \pi(x_i)), \tag{18}$$

where $\pi(x_i)$ is the parent node set of $x_i$ in graph $G$. In this way, the probability distribution of the mission nodes allocated to UAV $u_l$ can be determined uniquely:

$$P_{u_l}(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} P(x_i \mid \pi(x_i)). \tag{19}$$

Finally, the remaining true targets can be assigned to the other members of the Co-UAVs, such as UAV member $u_m$, by the same factorized distribution as in equation (19).
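As a toy illustration of how the factorized distribution drives assignment, the sketch below hard-codes a conditional probability table with the attack probabilities reported in Section 4.4 (0.83, 0.82, 0.86); the table layout and the greedy argmax rule are simplifications of ours, not the paper's inference procedure.

```python
import numpy as np

# cpt[l][i]: probability that UAV l+1 successfully attacks true target i+1,
# i.e. P(x_i | pi(x_i)) collapsed into a single table for this toy example.
cpt = np.array([
    [0.83, 0.10, 0.07],   # UAV 1
    [0.09, 0.82, 0.09],   # UAV 2
    [0.05, 0.09, 0.86],   # UAV 3
])

# Each UAV takes the target with maximal conditional probability.
assignment = {f"UAV{l + 1}": int(np.argmax(row)) + 1 for l, row in enumerate(cpt)}
print(assignment)  # {'UAV1': 1, 'UAV2': 2, 'UAV3': 3}
```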

4. Experiments

4.1. The Experimental Platform of UAVs

The experiments were conducted on Co-UAVs equipped with an adaptive camera, a flight controller, an algorithmic solver, and a data transmitter. Figure 9 shows a single UAV platform, which perceives the outside environment through its onboard adaptive-resolution camera mounted on a 3-DOF pan-tilt platform and reads its real-time flight status from the inner sensors integrated in the PIX-4 flight controller. The environmental information and flight status are then transmitted to the airborne Intel Compute Stick, which performs algorithm computing, target recognition, and instruction generation. Finally, the generated instructions are converted to motor commands via the PIX-4 controller.

The Co-UAVs' experimental platform is shown in Figure 10. The mobile screen can read all data from the onboard Intel Compute Stick and change the algorithm parameters. The ground station obtains real-time information from the flying UAVs, including image features, flight status, and cooperative information. After calculation, the ground station sends control commands to each UAV.

4.2. Image Mosaic

Figures 11(a)–11(d) show 4 images captured by the onboard cameras of the Co-UAVs, which are transferred to the clouds (ground station).

The combined image of the whole environment can be obtained by applying the image mosaic algorithm proposed in this paper. The test region is a square with side length $l$, and the threshold value of the image mosaic is 0.85. The resulting mosaic is shown in Figure 12.
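For concreteness, a match-acceptance check under an assumed normalized cross-correlation similarity; the paper gives only the 0.85 threshold, not the measure, and the patches below are synthetic placeholders.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches, in [-1, 1]."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

patch1 = np.random.rand(32, 32)                  # placeholder l x l test regions
patch2 = patch1 + 0.05 * np.random.randn(32, 32)
if ncc(patch1, patch2) >= 0.85:                  # the paper's mosaic threshold
    print("common region accepted: montage the two images here")
```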

In Figure 12, the four images are montaged together, and the information of the whole environment is obtained through the image mosaic. The result shows that the proposed method can find the similar regions between images and montage the four images based on the common regions, indicating the feasibility of PMVSC.

4.3. Targets’ Recognition

Setting three true targets and three false targets in the target area, three UAVs are involved in the search and attack mission. The model parameter matrix is set as the unit matrix, and the related similarity constraints on the target features are chosen accordingly.

The standard true and false targets used for training are shown in Figure 13, and the test results of each UAV in actual flight are shown in Figure 14. Even when a target has large distortion (such as dust cover, edge deformation, or random orientation), the proposed method can extract feature points, calculate the similarity, and classify and recognize the target accurately.

To describe the target recognition process of this algorithm, a true target (Figure 14(a)) and a false target (Figure 14(f)) are selected as examples, and the intermediate images of the processing pipeline are presented in Figures 15 and 16, respectively.

The airborne computer first transforms the acquired image into the HSI color space, which is better suited to machine recognition. After eliminating useless information, it extracts the eigenvalues of the transformed image. In the original eigenvalue space, however, the eigenvalues almost fill the whole space, so the features cannot be classified to distinguish the target type. Therefore, the feature space is transformed according to the algorithm constructed in this paper. In the transformed feature space, the eigenvalues have an obvious distribution and can be classified directly. Figures 15(d) and 16(d) are thus identified as different target categories, where Figure 15(d) belongs to the T (true) class and Figure 16(d) belongs to the F (false) class. Finally, the true and false targets are identified in the new feature space, and the task of accurate multitarget group recognition is completed.

4.4. Targets’ Assignment

Assuming the UAVs have detected all the true targets, the tasks must be assigned so that the Co-UAVs can cooperate to fulfill the mission at minimum cost. For the $l$th member of the Co-UAVs, $u_l$, the probability of attacking a true target $x_i$ is given by the conditional distribution $P(x_i \mid \pi(x_i))$ of equation (19).

Figure 17 is a picture of the cooperative attack of multiple UAVs over the targets. The three UAVs attack their corresponding targets, with attack probabilities of 0.83, 0.82, and 0.86, respectively, all labelled in Figure 17; the optimal task allocation among the UAVs is obtained by choosing the maximal probability value. The relevant target assignment algorithms of this paper are tested on the UAV experimental platform. Not limited to theoretical simulation, this paper applies the algorithm in practice and demonstrates its feasibility on the actual Co-UAVs' platform.

4.5. Experimental Results on Co-UAVs

In order to verify the effectiveness and feasibility of the proposed mechanism, PMVSC is tested in a real environment. In the experiment, 3 Co-UAVs were used to cluster, search, identify, and locate the true and false targets (circular targets, 7 m in diameter, with a 2 m target recognition area) in the target area and then attack them. The whole area is about 1000 m × 250 m; the flight area includes the take-off and landing area (a rectangle of 100 m × 50 m) and the target area (a rectangle of 200 m × 300 m). Six targets were set in the target area. During each attack task, three targets were randomly selected and marked with a white "T" sign in the target center to represent the true targets; similarly, the other three targets bore an "F" sign to represent the false targets.

The schematic illustration of the actual experimental environment is shown in Figure 18 (the experimental area is the irregular region shown in the figure due to the limitations of the actual environment), which contains hidden targets (grey) as well as true and false targets (red).

Figure 19 shows the three Co-UAVs in the air above the targets' environment, where multiple targets need recognition. Each UAV perceives the outside world through its onboard camera, and the perceived information is transferred to the clouds (the green area in Figure 18) to merge the independent, partial images into a whole image and to distinguish the true targets. Finally, the true targets are assigned to the respective UAVs for attack, as shown in Figures 20(a) and 20(b). In Figure 20(a), the armed UAV (which carries a white sandbag as ammunition) receives the attack command and flies to the assigned target. Figure 20(b) shows the result after the attack, from which we can see the target is attacked precisely, indicating the feasibility and validity of the proposed method on Co-UAVs.

5. Conclusions

Aiming at the problem of attacking multitargets, this paper proposes a strategy of multi-UAV precise target recognition, attack, and task assignment based on PMVSC. The following are some concluding remarks.

(a) A humanoid mechanism and algorithm are built with reference to the humans' vision sharing model. The proposed PMVSC performs well not only in simulations but also in experimental practice.

(b) A UAV platform is built, consisting of an onboard camera, a 3-DOF pan-tilt, a PIX flight controller, and a compute stick. The Co-UAVs are based on multiple UAVs and a ground station. All the proposed algorithms (including vision sharing, target recognition, and target assignment) are tested on the Co-UAVs to confirm that the proposed method is practically feasible.

(c) The proposed constructive mechanism is expected to shed new light on our understanding of human vision sharing, which can directly inform the design of human-like algorithms.

Still, several issues need further study.

(a) Cooperation among dozens of UAVs: though the cooperation and formation of UAVs have been studied, the proposed method has been applied with only three UAVs; how to generalize it and implement it on more UAVs is important future work.

(b) Attack of moving targets: in this paper, the targets are placed on the ground and are static. Compared with static targets, moving targets are much harder to attack. Research on dynamic targets needs further study.

Data Availability

All data generated or analyzed during this study are included within the manuscript. In addition, all data included in this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 61573373.