Mathematical Problems in Engineering
Volume 2017, Article ID 7372013, 8 pages
https://doi.org/10.1155/2017/7372013
Research Article

Sensor Network Disposition Facing the Task of Multisensor Cross Cueing

Air and Missile Defense College, Air Force Engineering University, Xi’an 710051, China

Correspondence should be addressed to Ce Pang; 18124497969@163.com

Received 2 December 2016; Accepted 18 May 2017; Published 28 June 2017

Academic Editor: Federica Caselli

Copyright © 2017 Ce Pang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In order to build a sensor network suited to the task of multisensor cross cueing, the requirements for initiating a cue and for being cued are analyzed. Models are built using probability theory, the probability of sensor cueing for a moving target is derived, and the best distance between two sensors is then calculated. The operational environment is described by a normal distribution function. When deploying the sensor network, its elements, the cueing demands of the operational environment, and the coverage probability of the network are all considered, and an optimization algorithm for the sensor network based on hypothesis-testing theory is designed. The simulation results indicate that the algorithm produces a sensor network meeting these requirements. On that basis, two cases, a target in linear motion and a target in sinusoidal motion, are used to test the performance of the network, showing that the network can detect targets without interruption through multisensor cross cueing.

1. Introduction

Multisensor cross cueing technology was proposed and developed to solve the problem of dynamic sensor management: when sensors carry out tasks together, they can cue one another, and a cued sensor can acquire information about a target directly in order to gather further information [1–3]. Multisensor cross cueing can greatly improve the detection capability of sensors in intelligence, surveillance, and reconnaissance systems, and it plays a role mainly in two respects. On the one hand, when a sensor breaks down or can no longer detect a target, it can cue other sensors to hand over the detection task, so that the target is detected continuously [4]; on the other hand, a sensor can cue other sensors to detect a target jointly, so that detection accuracy is improved or extra information is obtained [5].

Multisensor cross cueing technology was regarded in America as an important development trend in the sensor field up to 2015 [6]. After the definition, functions, and methods of multisensor cross cueing were made clear [4, 7, 8], the technology entered the application phase. The study in [9] uses multisensor cross cueing technology to simulate and verify a scenario in which SAR and MTI sensors jointly survey an entire target area. Katsilieris and Narykov study the cueing functions between different kinds of sensors and propose sensor-management methods under cueing conditions [10, 11]. Salvagnini and Muratore discuss the opportunities and methods of sensor cueing when different kinds of sensors are carried on multiple platforms [12, 13]. Pang et al. study how multisensor cross cueing technology can be applied to target tracking [14].

To the best of our knowledge, the problem of sensor network deployment for multisensor cross cueing has not been addressed in previous studies. Yet multisensor cross cueing takes place within a sensor network and requires good network connectivity to satisfy the communication demand, so the quality of the sensor network directly influences the effect of multisensor cross cueing.

In this paper, the sensors are radars, and the sensor network is built for multisensor cross cueing. Models are constructed based on probability theory, the probability of multisensor cross cueing is defined, and the best distance between two sensors is calculated. The combat environment is described using a normal distribution function, and an optimization algorithm based on hypothesis testing is proposed to optimize the sensor network. The simulations show both the process of building the sensor network and the process of multisensor cross cueing, indicating that the resulting network satisfies the needs of cross cueing and allows targets to be detected continuously.

2. Basic Theory of Sensor Cueing

2.1. Inspiring Conditions of Cueing between Two Sensors

Assume that the distance between two sensors is given, along with the detecting radius and the communicating radius of each. A sufficient condition for cross cueing between the two sensors is that they can communicate with each other, that is, that their distance does not exceed either communicating radius.

In this paper, the cueing mode between two sensors is “beam cueing,” and effective cueing is defined as follows: if cross cueing can happen between two sensors and the cued sensor can catch the target, the cueing is effective; otherwise the cueing is invalid. The condition for effective cueing is that, in addition to being able to communicate, the two sensors' detecting areas overlap, so that the distance between them is less than the sum of their detecting radii.

A schematic diagram of the conditions under which cross cueing happens between two sensors is shown in Figure 1.

Figure 1: The relationship of two sensors when effective cueing happens.
2.2. The Cueing Model between Sensors Based on Probability Theory

Assume that one sensor is the cueing sensor and the other is the cued sensor. To simplify the model, in this paper the two detecting radii are taken to be equal. Assume that the target is in the detecting area of the cueing sensor, and let one event stand for the cueing sensor cueing the other. In terms of probability theory, the probability of this event is the ratio of the common detecting area of the two sensors to the detecting area of the cueing sensor.

After a geometric calculation of the common area of the two detection circles, the probability can be obtained as a function of the distance between the sensors.
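Since the original formula is not reproduced above, the geometric calculation can be sketched as follows, assuming (as the model suggests) that the cueing probability is the ratio of the common detecting area of the two circles to the detecting area of the cueing sensor; the function names and radii are illustrative only.

```python
import math

def overlap_area(d, r1, r2):
    """Area of intersection of two circles with radii r1, r2 whose centres are distance d apart."""
    if d >= r1 + r2:
        return 0.0                            # disjoint: no common detecting area
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2     # one detecting disc lies inside the other
    # general case: lens area as the sum of two circular segments
    a1 = math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))  # half-angle at centre 1
    a2 = math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))  # half-angle at centre 2
    return (r1 * r1 * (a1 - math.sin(2 * a1) / 2)
            + r2 * r2 * (a2 - math.sin(2 * a2) / 2))

def cueing_probability(d, r1, r2):
    """P(target in both detecting areas | target in the cueing sensor's detecting area)."""
    return overlap_area(d, r1, r2) / (math.pi * r1 * r1)
```

As the distance grows from zero to the sum of the radii, this probability falls monotonically from one to zero, matching the shape of the curve described in Figure 2.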

Taking the distance between the two sensors as the horizontal coordinate and the probability as the vertical coordinate, the relationship between them is as shown in Figure 2.

Figure 2: The curve of the cueing probability function.
2.3. The Multisensor Cross Cueing Model When Targets Move

Because targets move fast when coming, this paper focuses on the problem of multisensor cross cueing for fast-moving targets. If a target is flying out of the detecting area of the sensor currently tracking it, that sensor needs to cue other sensors to hand over the task of detecting the target. Once another sensor accepts the task and starts detecting the target, the original sensor no longer detects it.

Firstly, the cueing time is defined as follows.

Assume that, at one time, the distance between the target and the detecting sensor and the distance between the target and a candidate sensor are both measured, and that, at a later time, both distances are measured again. While the detecting sensor is tracking the target, if its distance to the target is increasing while the target is still within its detecting area, it produces the need for cueing, and it then cues other sensors to take over the detection task. When the candidate sensor's distance to the target is decreasing and the target is in its area, the candidate sensor has the ability to take over the detection task and can be cued.
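These trigger conditions can be sketched in code; the predicates below are a hypothetical reading of the moving-away and moving-toward conditions described above, with all names chosen for illustration.

```python
def needs_cueing(d_prev, d_now, r_detect):
    """The detecting sensor produces a cueing need when the target is moving
    away from it but is still inside its detecting area (hypothetical reading
    of the trigger condition)."""
    return d_now > d_prev and d_now <= r_detect

def can_be_cued(d_prev, d_now, r_detect):
    """A candidate sensor can take over when the target is approaching it
    and is already inside its detecting area."""
    return d_now < d_prev and d_now <= r_detect
```

A sensor that satisfies `needs_cueing` would then select, among all neighbours satisfying `can_be_cued`, the one with the highest takeover probability.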

Assume that one event stands for the cued sensor catching the target successfully. As this event occurs after the cueing event, its probability is obtained, in terms of probability theory, as a conditional probability given that cueing has occurred.

For this conditional probability, a closed-form expression can again be obtained geometrically. Taking the distance between the sensors as the horizontal coordinate and the probability as the vertical coordinate, the relationship is as shown in Figure 3.

Figure 3: The curve of the takeover probability function.

From Figure 3, the probability can be read off as a function of the distance, and the distance at which it reaches its maximum is taken as the best distance between two sensors.

When a sensor needs to cue others and there are several sensors which can be cued, the cued sensor is chosen on the basis of the takeover probability.

In addition, the detecting radius of each sensor is usually smaller than its communicating radius [15].

What is more, if the assumption that the two detecting radii are equal is dropped, corresponding expressions can still be derived.

It can be seen that the two probabilities are then functions of three variables, the distance and the two detecting radii, and, with one radius held fixed, the resulting surfaces are as Figures 4 and 5 describe.

Figure 4: The surface of the cueing probability function.
Figure 5: The surface of the takeover probability function.

3. Build the Sensor Network

3.1. Combat Environment Scenarios

Assume that, in the combat environment, the coordinates of our position and of the enemy position are given, and that a number of sensors need to be deployed to detect targets of the enemy. The relative positions of the two sides are described in Figure 6, and the shaded area is the region we should watch to detect targets from the enemy.

Figure 6: The position relationship of the enemy and our side.

Based on the rule that targets should be detected as soon as possible, sensors should be deployed near the enemy; but sensors that are too near the enemy can be destroyed by enemy weapons. According to the rule of [16], assume that, in the rectangular deployment area, the strategic value of each position obeys a normal distribution.
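A minimal sketch of such a strategic-value field, assuming a two-dimensional normal surface peaking at a single position of highest value (the peak location and spread below are illustrative, not the paper's values):

```python
import math

def strategic_value(x, y, x0=0.0, y0=0.0, sigma=10.0):
    """Hypothetical strategic-value field: a 2-D normal (Gaussian) surface
    peaking at (x0, y0), the assumed position of highest strategic value,
    and decaying with distance at a rate set by sigma."""
    return math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
```

The value is 1 at the peak and falls off smoothly toward the edges of the deployment rectangle, capturing the trade-off that positions near the enemy are valuable but exposed.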

3.2. The Building Rules of Sensor Network

Building the sensor network, which is the basis for multisensor cross cueing, has a direct impact on the performance of multisensor cross cueing. In order to satisfy the need of multisensor cross cueing, the total number of degrees [17] in the sensor network should remain fairly large, giving good connectivity between the sensors.

When deploying sensors in the area, the following rules should be obeyed:
(1) Sensors tend to be deployed where the strategic value is high.
(2) In order to satisfy the need of multisensor cross cueing, the distance between each pair of sensors tends toward the best distance.
(3) The coverage of the sensor network should remain large.

Rule one is designed to match the combat environment in this paper; rule two is designed to satisfy the need of multisensor cross cueing; rule three follows the rule adopted in previous papers.

3.3. The Steps of Building Sensor Network

To Build the Earning Degree Function. Consider one sensor at a given position. A number of sensors can communicate directly with it, forming its neighbour set, and the probability of cross cueing between the sensor and each neighbour is known from Section 2. Assume that the strategic value at the sensor's position is also known; then the earning degree of the sensor can be calculated as a weighted combination of these quantities and the coverage of the network.

In the function, the weights are constants, and the coverage term is the area which the sensor network covers.

In this paper, the first term is called the strategic earning, the second term the cueing earning, and the third term the coverage earning; their weighted sum is called the total earning.
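The exact weighting in the paper's function is not recoverable from the text, so the plain weighted-sum form below is an assumption used only for illustration:

```python
def earning_degree(strategic_value, cue_probs, coverage_area, w1, w2, w3):
    """Hypothetical earning degree of one sensor: a weighted sum of the
    strategic value at its position (strategic earning), the cueing
    probabilities with its communicating neighbours (cueing earning),
    and the area covered by the network (coverage earning)."""
    return w1 * strategic_value + w2 * sum(cue_probs) + w3 * coverage_area
```

Raising one weight relative to the others biases the optimization toward the corresponding rule, which is exactly the behaviour examined in the two simulation cases of Section 4.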

To Initialize the Positions of Sensors. At the initial time, the grid method [17] is used to initialize the positions of the sensors: the area is divided into grids, and one sensor is placed at each vertex.
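A minimal sketch of the grid initialization, assuming sensors are placed on the vertices of a uniform grid over the deployment rectangle (grid dimensions are illustrative):

```python
def grid_positions(x_min, x_max, y_min, y_max, nx, ny):
    """Place sensors on the vertices of a regular nx-by-ny grid spanning
    the deployment rectangle (the grid method cited in the text)."""
    xs = [x_min + i * (x_max - x_min) / (nx - 1) for i in range(nx)]
    ys = [y_min + j * (y_max - y_min) / (ny - 1) for j in range(ny)]
    return [(x, y) for y in ys for x in xs]
```

These positions are only a starting point; the optimization step then moves each sensor to trade strategic, cueing, and coverage earnings against one another.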

To Optimize the Positions of Sensors. Assume that, at each time step, a sensor considers the direction toward a neighbouring sensor. It can move along this direction or the opposite one, with a speed calculated from the change in earning.

In the speed function, the step-size coefficient is a constant value.

If a sensor moves in one direction, its strategic earning can increase and the coverage earning can decrease; if it moves in the opposite direction, the strategic earning can decrease and the coverage earning can increase (in the case where every two sensors have a common detecting area).

According to statistical detection theory, one hypothesis represents the case that the total earning increases if the sensor moves in the first direction, and the other hypothesis represents the case that the total earning increases if it moves in the opposite direction. Assume that, at each time step, each direction has a probability of being chosen and an estimated probability that the total earning then increases. After moving, the total earning of the sensor is reevaluated.

(1) When the total earning increases after the move, the probabilities are updated as follows. At the next time step, the probability that the total earning will increase after moving in the chosen direction is raised relative to its previous value, and it can be shown that the updated probability is the larger of the two. That is to say, if the move at the current time step increases the total earning, then, at the next time step, the probability of choosing that direction will increase.

(2) When the total earning does not increase after the move, the update is symmetric. At the next time step, the probability that the total earning will increase after moving in the chosen direction is lowered relative to its previous value, and it can be shown that the updated probability is the smaller of the two. That is to say, if the move at the current time step does not increase the total earning, then, at the next time step, the probability of choosing that direction will decrease.
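The direction choice and probability update can be sketched as follows; the update rule below is a hypothetical stand-in for the paper's hypothesis-test update, chosen only to exhibit the raise-on-success, lower-on-failure behaviour described above:

```python
import random

def choose_direction(p_forward, p_backward):
    """Pick a move direction with probability proportional to the current
    estimate that moving that way increases the total earning."""
    return "forward" if random.random() < p_forward / (p_forward + p_backward) else "backward"

def update_probability(p, earned_more, alpha=0.1):
    """Hypothetical recursive update in the spirit of the hypothesis test:
    raise the success probability of a direction after a move that
    increased the total earning, and lower it otherwise."""
    return p + alpha * (1.0 - p) if earned_more else p * (1.0 - alpha)
```

Repeated application drives the probability of a consistently successful direction toward one and that of a consistently unsuccessful direction toward zero, so the sensor positions converge.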

To Build the Sensor Network. After the final iteration, the positions of the sensors no longer change. Sensors which can communicate with each other are then connected to form edges, the weight of each edge is calculated according to function (4) and function (9), and, finally, a graph is obtained, which is the sensor network.
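This final graph-building step can be sketched as follows, assuming an edge is created whenever the centre distance of two sensors does not exceed the communication radius, with the edge weight supplied by a caller-chosen function such as the cueing probability:

```python
import math

def build_network(positions, comm_radius, weight):
    """Connect every pair of sensors whose centre distance does not exceed
    the communication radius, attaching an edge weight computed by a
    caller-supplied function of the distance (e.g. a cueing probability)."""
    edges = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            (x1, y1), (x2, y2) = positions[i], positions[j]
            d = math.hypot(x2 - x1, y2 - y1)
            if d <= comm_radius:
                edges.append((i, j, weight(d)))
    return edges
```

The resulting edge list is the weighted graph on which the cross-cueing hand-overs of Section 4.2 take place.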

4. The Simulation and the Analysis

In the process of simulation, the parameters are set to fixed values. In the beginning, the sensors are deployed according to the grid method [18], and the distribution is as Figure 7 describes. The red circles are sensors, and a line between two sensors means that cross cueing can happen between them.

Figure 7: Positions of sensors at the beginning of iteration.

In the simulation, the values of the three weights exert a direct influence on the optimized result of the sensor network. If only the strategic weight is nonzero, the optimal result is for all sensors to gather at the position of highest strategic value; if only the cueing weight is nonzero, the optimal result is for every two sensors to be separated by the best distance; if the coverage weight dominates, the optimal result is for no two sensors to have a common detecting area.

At the initial time, the sum of the degrees of all vertexes and the average distance between vertexes are recorded for later comparison.

4.1. The Optimization of the Sensor Network

The Optimization of the Sensor Network with the Coverage Earning Ignored. In this case, the coverage earning is ignored, and the iteration process is as Figure 8 describes.

Figure 8: The iteration process of the optimization using the method in this paper.

The optimal result is as Figure 9 describes.

Figure 9: The optimized result of sensor network.

In Figure 9, the sum of the degrees of all vertexes and the average distance between vertexes can be computed. It can be seen that, when the sensors stop moving, the distance between two sensors is near the best distance.

From Figure 9, it can be seen that, when the coverage earning is ignored, all sensors tend to move toward the position of highest strategic value. The detecting area of the sensor network then decreases, mainly for two reasons: first, the highest strategic earning at that position attracts every sensor toward it; second, the sensors move close together to gain more cueing earning. Although all sensors tend toward the same position, some distance remains between each pair of sensors, because each sensor adjusts its distance to the others toward the best distance under rule two.

The Optimization of the Sensor Network with the Coverage Earning Considered. In this case, the coverage earning is considered, and the iteration process is as Figure 10 describes.

Figure 10: The iteration process of the optimization using the method in this paper.

The optimal result is as Figure 11 describes.

Figure 11: The optimized result of sensor network.

In Figure 11, the sum of the degrees of all vertexes and the average distance between vertexes can again be computed.

Compared to Figure 9, the sensor network in Figure 11 achieves a larger coverage area while still satisfying the need of multisensor cross cueing, and a number of sensors remain near the position of highest strategic value, which corresponds to the combat reality.

In this paper, multisensor cross cueing is analyzed on the basis of the sensor network of Figure 11. In Figure 11, the coordinates of all sensors are as Table 1 describes, and the weight of each edge is as function (17) describes.

Table 1: Coordinates of sensors.
4.2. The Process of Multisensor Cross Cueing

The Process of Multisensor Cross Cueing When the Target Makes Linear Motion. Assume that the target is a UCAV which, at the initial time, takes off from the enemy position at a constant speed. The target flies out along a straight line to a turning point and then returns to the enemy position along the same line, so its coordinate at any time is determined piecewise by whether it is on the outbound or the return leg.
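This out-and-back track can be sketched as follows; the turning time and endpoints are illustrative, as the paper's numerical values are not reproduced above:

```python
def linear_path(t, t_turn, start, turn_point):
    """Hypothetical out-and-back linear track: fly from `start` toward
    `turn_point` until time t_turn, then return along the same line."""
    (x0, y0), (x1, y1) = start, turn_point
    # fraction of the way to the turning point (outbound, then mirrored back)
    frac = t / t_turn if t <= t_turn else (2 * t_turn - t) / t_turn
    return (x0 + frac * (x1 - x0), y0 + frac * (y1 - y0))
```

Sampling this path over time and asking which sensor currently detects the target reproduces the hand-over sequence plotted in Figure 12.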

The process of multisensor cross cueing is as Figure 12 describes. The horizontal coordinate is time, and the vertical coordinate is the sensor responsible for detecting the target.

Figure 12: Process of multisensor cross cueing for the UCAV making linear motion.

The Process of Multisensor Cross Cueing When the Target Makes Sinusoidal Motion. Assume that the target is a UCAV which, at the initial time, takes off from the enemy position and makes sinusoidal motion; its flight path is a sine curve, and its coordinate at any time is determined by that curve.
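The sinusoidal track can be sketched as follows; the speed, amplitude, and wavelength are illustrative assumptions, as the paper's path parameters are not reproduced above:

```python
import math

def sinusoidal_path(t, speed=1.0, amplitude=5.0, wavelength=20.0, x0=0.0, y0=0.0):
    """Hypothetical sinusoidal UCAV track: advance along x at constant
    speed while oscillating in y with the given amplitude and wavelength."""
    x = x0 + speed * t
    y = y0 + amplitude * math.sin(2 * math.pi * (x - x0) / wavelength)
    return x, y
```

As with the linear case, sampling this path and recording the detecting sensor at each instant yields the hand-over sequence of Figure 13.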

The process of multisensor cross cueing is as Figure 13 describes. The horizontal coordinate is time, and the vertical coordinate is the sensor responsible for detecting the target.

Figure 13: Process of multisensor cross cueing for the UCAV making sinusoidal motion.

The two simulations illustrate that the sensor network enables smooth multisensor cross cueing and successive detection of the target.

5. Conclusion

In this paper, a probability model of multisensor cross cueing is built based on probability theory, and the best distance between two sensors is calculated. The combat environment is described using a normal distribution function. On this basis, an optimization algorithm based on hypothesis testing is proposed to optimize the sensor network. The simulation indicates that the method introduced here yields a sensor network that satisfies the need of multisensor cross cueing. Furthermore, the process of multisensor cross cueing is demonstrated, showing that the technology can adapt to combat reality, enables targets to be detected continuously, and has a good application prospect.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 61573374.

References

  1. G. W. Ng, K. H. Ng, and L. T. Wong, “Sensor management - control and cue,” in Proceedings of the 3rd International Conference on Information Fusion, FUSION 2000, Paris, France, 2000.
  2. Y. He, X. Guan, and G. Wang, “Survey on the progress and prospect of multisensor information fusion,” Journal of Astronautics, vol. 26, no. 4, pp. 524–530, 2005.
  3. H. Fan, Z. Long, S. Huang, Y. Li, and M. Gao, “Development and study of multi-sensor cross cueing technology,” Aerodynamic Missile Journal, vol. 2, pp. 79–84, 2012.
  4. M. Avalle, “Studies and simulations on sensor fusion and cueing for fighter application,” in Advisory Group for Aerospace Research and Development, North Atlantic Treaty Organization, pp. 224–229, Canada Communication Group, Canada, 1996, Section IV.
  5. G. C. Crystal, “An approach to realizing the potential of information operations,” IEEE Transactions on BAE, vol. 3, pp. 33–40, 2001.
  6. H. Zhaochao and S. Huan, “Analysis of multi-sensors detecting and tracking cm based on near-space,” Aero Weaponry, vol. 2, pp. 3–7, 2010.
  7. T. Peli, M. Young, R. Knox et al., “Feature level sensor fusion,” DARPA, Atlantic Aerospace Electronics Corporation, 1999.
  8. D. Strömberg, “A sensor-independent sensor-oriented tracking architecture,” in Proceedings of the IEEE Information Decision and Control Conference, pp. 111–116, Adelaide, SA, Australia, 1999.
  9. L. Bush, “Semi-automated cueing of predator UAV operators from radar moving target (MTI) data,” MIT Lincoln Laboratory, Intelligence Surveillance Reconnaissance (ISR) Systems Group, Integrated Sensing and Decision Support (ISDS) Laboratory, Jan. 24, 2006.
  10. F. Katsilieris, “Sensor management for surveillance and tracking: an operational perspective,” Delft University of Technology, Delft, 2015.
  11. A. S. Narykov, O. A. Krasnov, and A. Yarovoy, “Effectiveness-based radar resource management for target tracking,” in Proceedings of the International Radar Conference, IEEE, Lille, France, October 2014.
  12. P. Salvagnini, F. Pernici, M. Cristani et al., “Non-myopic information theoretic sensor management of a single pan-tilt-zoom camera for multiple object detection and tracking,” Computer Vision and Image Understanding, vol. 134, pp. 74–88, 2015.
  13. M. Muratore, R. T. Silvestrini, and T. H. Chung, “Simulation analysis of UAV and ground teams for surveillance and interdiction,” Journal of Defense Modeling and Simulation: Applications, Methodology, Technology, vol. 11, no. 2, pp. 125–135, 2014.
  14. C. Pang, S. Huang, and J. Liu, “Multi-sensor cross cueing technology and its application in target tracking,” Chinese Journal of Astronautics, vol. 38, no. 4, pp. 401–409, 2017.
  15. R. A. Hooshmand and S. Soltani, “Fuzzy optimal phase balancing of radial and meshed distribution networks using BF-PSO algorithm,” IEEE Transactions on Power Systems, vol. 27, no. 1, pp. 47–57, 2012.
  16. Z. Lu, M. Li, and X. Ji, “Research on radar searching volume based on multi-sensor cooperation technology,” Aeronautical Computing Technique, vol. 16, no. 3, pp. 30–35, 2015.
  17. J. Lin, “Network analysis of China's aviation system, statistical and spatial structure,” Journal of Transport Geography, vol. 22, pp. 109–117, 2012.
  18. Y. Zou and K. Chakrabarty, “Sensor deployment and target localization in distributed sensor networks,” ACM Transactions on Embedded Computing Systems, vol. 3, no. 1, pp. 61–91, 2004.