Research Article  Open Access
An Efficient Plot Fusion Method for High Resolution Radar Based on Contour Tracking Algorithm
Abstract
With the development of radar systems, the problem of enormous volumes of raw data has drawn much attention. A plot fusion method based on a contour tracking algorithm is proposed to detect extended targets in a radar image. Firstly, the characteristics of radar images in complex environments are described. Then, the steps of the traditional method, the region growing method, and the proposed method are introduced, and the algorithm for tracking the contour of an extended target is illustrated in detail. Because the size of the target is considered in the proposed method, it is not necessary to scan all the plots in the image; the proposed method is therefore much more efficient than several existing methods. Lastly, the performance of several methods is tested using raw data from two real-world scenarios. The experimental results show that the proposed method is practical and most likely to satisfy the real-time requirement in various complex environments.
1. Introduction
The raw data of high resolution radars is enormous in the real world; therefore, it is hard to track and display targets in real time. The low resolution of earlier radars made a target appear in a single resolution cell; high resolution radars, however, allow a target to be found in several cells, yielding what is called an “extended target” or “extended object” [1]. Conventional detection algorithms detect targets in a single resolution cell, yielding what is called a “plot” here, and are no longer suitable for many scenarios. An object should be regarded as extended if the target extent is larger than the radar resolution [2], and a general approach based on support functions in [3] can be used to model smooth object shapes. Although the raw data of a high resolution radar is enormous, only a few extended targets exist in the surveillance area; therefore, raw data fusion is needed for tracking and displaying targets in real time. Plot fusion can be divided into four steps: measurement segmentation, plot partition, extended target identification, and centroid extraction.
The first step, measurement segmentation, means finding the plots whose amplitude has potentially been affected by targets; the image segmentation algorithms in [4, 5] are applicable here. In the second step, plot partition, all the plots whose amplitude is affected by the same target are grouped into a cluster [6]: all the occupied plots are classified into clusters, and each cluster potentially denotes an extended target. A fast clustering algorithm, SDDC, based on single dimensional distance calculation is proposed in [6]; by comparison, SDDC is faster than the hierarchical and DBSCAN methods [7]. In [8–10], the plots of extended targets are partitioned by the well-known k-means and k-means++ clustering algorithms in [11]. An HK clustering algorithm [12] and an improved HK clustering algorithm based on ensemble learning [13] have also been proposed for this problem. The third step, extended target identification, judges whether the plots of a cluster are caused by an extended target or by strong clutter; the methods proposed in [14, 15] can be used to distinguish ships from strong clutter. As to the last step, centroid extraction, an improved centroid extraction algorithm for an autonomous star sensor is proposed in [16]; its computational complexity is lower than that of the algorithm proposed in [17].
With the increased resolution of modern radars, the volume of raw data increases significantly, and the raw data of one frame must be processed within one radar scanning cycle. Therefore, the computational complexity of the method matters greatly. Most of the computation is spent on the second step, obtaining the clusters, where each cluster is the set of plots occupied by one extended target. The improved centroid extraction algorithm for autonomous star sensors in [16] has a lower computational complexity than the algorithm in [17], but both methods require the entire stellar image. The traditional method has been widely used in practice, and its theory is described in [18, 19]; unfortunately, it scans the entire image twice. Meanwhile, a considerable number of extended targets of different sizes exist in the huge surveillance area, and much computation is spent scanning the entire image. Extended targets in an image can, however, be detected efficiently by the region growing methods in [20–22]. To solve the problem, a plot fusion method based on a contour tracking algorithm is proposed. As an important method in image processing, the contour tracking algorithm is introduced in [23], and some improvements are made in [24, 25]; in [24], the region features of targets and background are considered. However, the methods in [23–25] are unsuitable for this work, because only a few plots, compared with the entire image, are occupied by targets. The proposed method finds the plots belonging to the contour of an extended target efficiently and then calculates the accurate location of the target from its contour. Only part of the image is scanned by both the proposed method and the region growing methods in [20–22], but the proposed method is more efficient.
Because of nonintentional interference (sea clutter, littoral clutter, and ground clutter) and intentional interference (jammers), false alarms exist in the image; extended targets must therefore be detected in a complex environment. Related work has been done in [26–28]. In [26], the surveillance region is divided into LC (low clutter), MC (medium clutter), and HC (high clutter) zones, with a different clutter density in each zone. To compare and analyze the performance of the methods in detail, the three zones are tested with five methods: the centroid extraction algorithm in [16], the SDDC algorithm in [6], the traditional method, a method based on region growing [20], and the proposed method, which is based on the contour tracking algorithm in [23].
The remainder of the work is organized as follows. In Section 2, the target model and measurement model are presented. In Section 3, the steps of three methods are introduced: the traditional method, the region growing method, and the proposed method, here called the “contour tracking method.” The simulation and the performance of the five methods are presented in Section 4. Finally, a brief conclusion is given in Section 5.
2. Models and Notations
2.1. Target Model
We assume that targets exist in the surveillance area. The state of each target is needed for target tracking and display. The state of a target is defined as a sequence of parameters denoting the location of the target, the length and width of the target detected in the image, the angle between the initial bearing and the course of the target, and the amplitude of the target, which is determined by its real-world length and width and by the radar cross section (RCS) of the target.
2.2. Measurement Model
The surveillance area is divided into a grid of plots indexed by bearing and range. One frame of measurements is defined as the set of amplitudes of all the plots. A target is detected once it is scanned by the radar’s beam, and once a part of a target is scanned by the beam, the amplitude of the corresponding plot is affected by the target. The situation is illustrated in Figure 1.
The relevant parameters in Figure 1 are the beam width of the radar, the length and width of the target in the real world, and the distance between the target and the radar. As shown in Figure 1, once the direction of the beam falls within the angular extent of the target, the target is illuminated by the beam; the scope of this extent is calculated by (1). The length and width of the target in the bearing and range axes are calculated by (2), where CR denotes the coverage range of the radar; in high resolution radars these extents span far more resolution cells than in earlier radars, so many plots are occupied by one target. For a point target, that is, a target with zero physical length and width, substituting into (2) yields the minimum extents in bearing and range, given by (3).
The length and width of a target are larger than these minimum extents, respectively; therefore, a target can be discovered in several adjacent plots. Supposing the target shape is an ellipse, we assume that the amplitude of the plots inside the ellipse can be calculated by (4), with the quantities defined in (5), which denote the amplitude of the target and the amplitude of the clutter in a plot, respectively. An indicator function is used to indicate whether a plot is occupied by the target: we formally say that a target occupies a plot if the difference between the target’s centroid and the plot location, rotated by the target’s course (derived from the target’s velocity) and normalized by the target’s extent, is lower than unity [1]. The complex noises have a Gaussian distribution with zero mean and are assumed to be independent and identically distributed (IID).
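The elliptical occupancy test described above can be sketched as follows; the function name, the coordinate convention, and the use of half-length/half-width as semi-axes are illustrative assumptions rather than the paper's exact formulation.

```python
import math

def occupies(plot_xy, centroid_xy, course_rad, half_len, half_wid):
    """Elliptical occupancy test: a plot is occupied by a target if the
    offset from the target centroid, rotated into the target frame and
    normalized by the semi-axes, falls inside the unit circle.
    All names are illustrative."""
    dx = plot_xy[0] - centroid_xy[0]
    dy = plot_xy[1] - centroid_xy[1]
    # Rotate the offset by the target's course into the target frame.
    c, s = math.cos(course_rad), math.sin(course_rad)
    u = c * dx + s * dy      # along-course component
    v = -s * dx + c * dy     # cross-course component
    # Normalize by the semi-axes and test against unity.
    return (u / half_len) ** 2 + (v / half_wid) ** 2 < 1.0
```

A plot well inside the ellipse passes the test, while one beyond the semi-axes fails, matching the "lower than unity" rule cited from [1].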
2.3. Distribution Characters of Targets
A radar measurement can be produced either by a target (ship, aircraft, or helicopter) or by clutter. For a shore-based radar operating in a complex environment, clutter echoes are mainly due to mountains, shores, buildings, and islands, but also to vehicles on roads, highways, and railways. Hence, it is convenient to divide the surveillance region into zones of three types according to their clutter density: low clutter (LC) zones, medium clutter (MC) zones, and high clutter (HC) zones [26]. The surveillance region can also be divided into three areas according to distance from the radar: shore area, nearshore area, and offshore area. Two real-world scenarios are illustrated in Figure 2.
Three areas are shown in Figure 2, where black points denote plots in which something is detected. As shown in Figure 2(a), the shore area corresponds to the HC zones: the shores are scanned by the beams, so almost all the plots in the HC zones are occupied. In Figures 2(a) and 2(b), many targets are distributed in the nearshore area and only a few targets are distributed in the offshore area; meanwhile, the nearshore area is in medium clutter and the offshore area is in low clutter.
3. Contour Tracking Method and Two Contrast Methods
The steps of three methods, the traditional method, the region growing method, and the contour tracking method, are presented in this section, and the contour tracking method is introduced in detail. The input of all three methods is a radar image consisting of a grid of plots.
3.1. Traditional Method
Step 1 (detection criterion). Take plots with the same range into consideration. Once something is detected in a sufficient number of these plots, an m-of-n style rule, we formally say that the corresponding plot is occupied. The indicator function in (6) tests whether the amplitude of a plot exceeds the detection threshold; once the count of such plots satisfies the criterion, something is declared detected in the plot.
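Step 1 describes an m-of-n style criterion; a minimal sketch follows, with m and n kept as parameters since the text does not fix their values, and the function name chosen for illustration.

```python
def m_of_n_detected(amplitudes, threshold, m, n):
    """Generic m-of-n criterion sketch: among n consecutive plots at
    the same range, declare a detection if at least m amplitudes
    exceed the threshold. m and n are left as parameters."""
    window = amplitudes[:n]
    # Count the plots whose amplitude exceeds the detection threshold,
    # mirroring the indicator function of (6).
    return sum(1 for a in window if a > threshold) >= m
```

For example, with m = 2 and n = 4, two above-threshold amplitudes in a four-plot window suffice for a detection.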
Step 2 (fusion in bearing axis). We assume that something is detected in a series of plots that are successive in bearing; these plots are regarded as one element of a set. These elements undergo further fusion in the next step.
Step 3 (fusion in range axis). Fuse the elements that are adjacent in range; the result is regarded as a cluster of plots, or a point cloud. These plots are then regarded as an uncertain target.
Step 4 (shape criterion). The shape of the point cloud is used to judge whether it is an extended target. Once the shape satisfies the limits in Section 2.2, all the plots occupied by the extended target have been found, and the centroid of the target is calculated via (7). Lastly, the length and width of the point cloud are taken as the length and width of the extended target.
Given all the plots occupied by a target, the methods in [16–18] can be used to calculate the accurate location of the target, and (7)–(9) are also widely used in centroid extraction. For a sea surveillance radar, however, the real-time requirement is more important. Therefore, to obtain high efficiency, the fusion algorithm in [18] is used; that is, the target centroid is calculated by (7), in which only plots whose amplitude exceeds the detection threshold contribute.
3.2. Region Growing Method
Step 1. Select the seeds in the image, spaced according to the minimum length and width of a target.
Step 2. Judge whether this seed is occupied. If yes, find all the adjacent plots whose amplitude exceeds the detection threshold with the method in [20], and then go to Step 3; if not, turn to the next seed. Repeat this step until all the seeds have been considered.
Step 3. The adjacent plots grown from a seed have now been obtained, which gives the shape of the point cloud. Then apply a shape criterion, just as in the traditional method, and calculate the centroid with (7).
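Steps 1–3 can be sketched as a breadth-first growth over the 8-neighborhood; the function name, the dict-based image representation, and the threshold handling are illustrative assumptions rather than the exact method of [20].

```python
from collections import deque

def grow_region(image, seed, threshold):
    """Region-growing sketch: starting from an occupied seed, collect
    all 8-connected plots whose amplitude exceeds the detection
    threshold. `image` maps (row, col) to amplitude."""
    if image.get(seed, 0.0) <= threshold:
        return set()                     # seed not occupied: skip it
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        # Examine the eight neighbors of the current plot.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nb = (r + dr, c + dc)
                if nb not in region and image.get(nb, 0.0) > threshold:
                    region.add(nb)
                    frontier.append(nb)
    return region
```

The returned set is the point cloud that the shape criterion of Step 3 then examines.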
3.3. Contour Tracking Method
The contour tracking method in this work is similar to the method in [23], but three improvements have been made. Firstly, instead of scanning all the plots in a frame, seeds are selected, and only these seeds are scanned and judged as to whether they belong to the contour of a target; this idea is learnt from the region growing algorithm. Secondly, the contour of a point target can also be detected correctly. Lastly, an improved method is proposed to judge whether the contour tracking process is completed. The contour tracking algorithm is introduced below.
3.3.1. Contour Tracking Algorithm
The algorithm’s principle is that once a plot belonging to the contour is detected, the next plot belonging to the contour is then found. We assume that the next contour plot belongs to the 8-neighborhood (i.e., the adjacent plots in the vertical, horizontal, and diagonal directions) of the current contour plot. The chain codes used to represent the eight neighbors are shown in Figure 3(a). To obtain the next contour plot efficiently, the eight neighbors of the current contour plot are scanned in order; the relationship is shown in Figure 3(a), where the black and grey plots denote the previous and current contour plots, respectively, and the eight contour path directions are coded as numbers 0–7. The neighbor that is scanned first depends on the contour path direction.
A simple example is shown in Figure 3(b). Given a previous and a current contour plot, the neighbors of the current plot are scanned counterclockwise, starting from a direction determined by the path from the previous plot. If the first neighbor scanned is not occupied, the scan turns to the next neighbor, and so on, until an occupied neighbor is found; that neighbor becomes the next contour plot. The current plot then becomes the previous plot, the newly found plot becomes the current plot, and the starting direction of the next scan is updated accordingly. This process is repeated until all the contour plots are found. How to judge whether all the contour plots have been discovered is discussed below.
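The direction-dependent scan order can be sketched as follows; the chain-code assignment and the start offset are assumptions standing in for the exact convention fixed by Figure 3(a), following a common Moore-tracing convention.

```python
# Chain codes 0-7 for the 8-neighborhood as (row, col) offsets,
# counterclockwise with code 0 pointing along +col (illustrative
# convention; the paper's Figure 3(a) fixes the actual assignment).
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def scan_order(entry_dir):
    """Counterclockwise scan order of the eight neighbors, starting
    from a direction determined by the code of the step that reached
    the current plot. The start offset (+6, i.e. two codes back) is
    an assumed convention."""
    start = (entry_dir + 6) % 8
    return [(start + k) % 8 for k in range(8)]
```

The key property is only that the starting neighbor varies with the incoming direction, so the trace hugs the contour instead of re-entering the region.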
3.3.2. An Improved Method for Judging Whether the Contour Tracking Is Completed
An improved method is proposed to judge whether all the contour plots have been discovered. If the amplitude of a plot is larger than the detection threshold, the plot is occupied. Once a plot has been recognized as a contour plot, its amplitude is set to −1; if the contour plot is the starting plot of this contour, that is, the first plot discovered in the contour, its amplitude is set to −2. Therefore, the amplitudes of all discovered contour plots are set below 0. If the next scanned plot is the starting plot and no undiscovered contour plot exists among the eight neighbors of the starting plot, we can formally declare that the contour tracking process is completed and all the contour plots have been discovered. If an undiscovered contour plot exists, it is regarded as the current contour plot, and the method continues.
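Combining the neighbor scan with this completion test gives the following sketch. The −1/−2 amplitude marking follows the text; the counterclockwise scan convention, the dict-based image, and all names are illustrative assumptions.

```python
# Chain-code offsets for the 8-neighborhood (illustrative convention).
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def trace_contour(image, start, threshold):
    """Contour tracking sketch with the completion test of Section
    3.3.2: discovered contour plots get amplitude -1, the starting
    plot gets -2, and tracking stops when the scan returns to the
    starting plot with no undiscovered contour plot among its
    neighbors. `image` is a mutable dict of (row, col) -> amplitude."""
    contour = [start]
    image[start] = -2.0                  # mark the starting plot
    cur, entry_dir = start, 0
    while True:
        nxt = None
        for k in range(8):               # counterclockwise scan
            d = (entry_dir + 6 + k) % 8
            nb = (cur[0] + OFFSETS[d][0], cur[1] + OFFSETS[d][1])
            amp = image.get(nb, 0.0)
            # Accept an undiscovered contour plot or the starting plot;
            # plots marked -1 are skipped, so they are not revisited.
            if amp > threshold or amp == -2.0:
                nxt, entry_dir = nb, d
                break
        if nxt is None:
            return contour               # isolated (point) target
        if nxt == start:
            # Completion test: stop only if no undiscovered contour
            # plot remains among the starting plot's neighbors.
            undiscovered = any(
                image.get((start[0] + dr, start[1] + dc), 0.0) > threshold
                for dr, dc in OFFSETS)
            if not undiscovered:
                return contour
            cur = start                  # continue from the start plot
            continue
        image[nxt] = -1.0                # mark as discovered
        contour.append(nxt)
        cur = nxt
```

Because a point target has no occupied neighbors, the scan finds nothing and returns the starting plot alone, matching the second improvement claimed in Section 3.3.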
3.3.3. Whole System of the Contour Tracking Method
The whole system of the contour tracking method in this work is shown in Figure 4; the method includes three steps.
Step 1. If the first frame of the image is involved, initialize the bearing step of the method; otherwise, the parameter “bearing step” in Figure 5(a) has already been calculated. Then the measurement, an image of plots, is obtained, and the seeds in the image are selected.
Step 2. Judge whether the seed belongs to the contour of a target. If yes, regard the seed as the starting contour plot, obtain the contour with the algorithm introduced in Sections 3.3.1 and 3.3.2, and go to Step 3. If not, turn to the next seed and repeat Step 2. Once all the seeds have been scanned, calculate the bearing step for the next frame from the sizes of the detected targets.
Step 3. Judge whether the region bounded by the contour meets the shape criterion. If yes, declare that a target is detected, and substitute the target’s contour plots into (7) to obtain the target’s accurate location. Then turn to the next seed and go back to Step 2.
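The seed selection that Steps 1 and 2 rely on can be sketched as a grid whose stride equals the minimum target size, so any target at least that large must cover at least one seed; the function name and the exact spacing rule are illustrative assumptions inferred from Section 3.2 and the "bearing step" discussion.

```python
def select_seeds(n_bearing, n_range, min_len, min_wid):
    """Seed grid sketch: seeds spaced by the minimum target length and
    width guarantee that any sufficiently large target covers at least
    one seed, so scanning only the seeds cannot miss it."""
    return [(b, r)
            for b in range(0, n_bearing, min_len)
            for r in range(0, n_range, min_wid)]
```

Since only the seeds are scanned rather than every plot, the cost of a frame scales with the number of seeds plus the contour lengths of the detected targets.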
Lastly, a sketch of the whole contour tracking method for locating targets is shown in Figure 5, in which two targets exist.
4. Simulation and Results
The performance of five methods has been assessed with real data from a high resolution marine surveillance radar. The parameters of the radar are shown in Table 1.

Two scenarios are tested in this experiment; the raw data of the first and second scenarios are shown in Figures 2(a) and 2(b), respectively. The surveillance area in both scenarios is divided into the same grid of plots. In the first scenario, the surveillance area is divided into nearshore and offshore areas; in the second scenario, it is divided into three areas: shore, nearshore, and offshore. The quantities of targets, plots, and occupied plots differ between the areas, and the comparison between the two scenarios is given in Table 2. The offshore area contains far more plots than the shore and nearshore areas, yet only a few targets and occupied plots exist offshore, while the number of occupied plots in the shore area is much larger than in the other two areas; the quantitative relation between the three areas is detailed in Table 2. In this work, the minimum radar scanning cycle is 10 seconds; only if the elapsed time of a method is less than 10 seconds can the method satisfy the real-time requirement.

The five methods are tested on the two scenarios, and the results are given in Table 3. The methods are run on a PowerPC MPC8640D at 1.0 GHz with 4 GB RAM in the Wind River Workbench 3.2 environment. In the first scenario, it takes about 7.6 seconds to extract all the occupied plots from the entire image, that is, to scan all the plots in the image and store the occupied ones. The region growing method and the contour tracking method do not scan all the plots; their elapsed time is therefore much less than that of the other methods, and both satisfy the real-time requirement. In the second scenario, due to the presence of the shore, the elapsed time of the region growing method increases significantly, so only the contour tracking method finishes within the radar scanning cycle; that is, only the contour tracking method satisfies the real-time requirement. The elapsed time of the traditional method is similar in the two scenarios but far exceeds the radar scanning cycle. k-means and k-means++ cannot work here for two reasons: firstly, the number of targets is unknown; secondly, because of noise and the enormous number of false alarms in the shore area, numerous initial centers fall on false alarms, so plenty of false targets and missed detections would arise. The centroid extraction method in [16] is designed to extract the centroids of stars, and a stellar image has far fewer pixels than a radar image; Table 3 shows that this method cannot work well when a large image with many more occupied pixels is involved, and a similar problem appears for the SDDC algorithm in [6]. It is worth mentioning that, in both scenarios, the elapsed time of the contour tracking method is much less than the time needed merely to scan all the plots.
Unfortunately, the true locations of the targets are unavailable, so the centroid estimation errors (CEEs) of the different centroid extraction methods cannot be compared. For a sea surveillance radar, however, the real-time requirement is more important, and among the five methods, only the proposed contour tracking method can process the enormous data of varied complex environments in real time.
A further analysis explains the results in Table 3. In the second scenario, the efficiencies of the region growing method, the traditional method, and the contour tracking method differ between the areas. As shown in Figure 6(d), the traditional method spends most of its time on the offshore area, because the offshore area contains far more plots than the other two areas; for the traditional method, the elapsed time is chiefly determined by the number of plots, so the proportion of elapsed time across the three areas is similar to the proportion of plots. As shown in Figure 6(e), the region growing method spends almost all of its time on the shore area, because almost all the occupied plots are distributed there; for the region growing method, the elapsed time is chiefly determined by the number of occupied plots. Lastly, as shown in Figure 6(f), the elapsed time of the contour tracking method in each of the three areas is much less than that of the other two methods, which means the contour tracking method is the most efficient in every kind of area.
5. Conclusion
In this paper, a plot fusion method based on a contour tracking algorithm is proposed to detect extended targets in the raw data of a high resolution radar. The performance of the proposed method is compared with that of four other methods on two scenarios recorded by a high resolution radar in the real world. From the results, three points can be summed up. Firstly, the contour tracking method is the most efficient method in all three kinds of areas. Secondly, if no shore area exists in the frame, the region growing method has high efficiency; otherwise, its elapsed time increases significantly. Thirdly, the methods in [6, 16] cannot work well when a large image with an enormous number of plots is involved. Therefore, the contour tracking method is practical and most likely to satisfy the real-time requirement in a complex environment.
Competing Interests
The authors declare that they have no competing interests.
References
[1] Y. Boers, H. Driessen, J. Torstensson, M. Trieb, R. Karlsson, and F. Gustafsson, “Track-before-detect algorithm for tracking extended targets,” IEE Proceedings: Radar, Sonar and Navigation, vol. 153, no. 4, pp. 345–351, 2006.
[2] C. Lundquist, U. Orguner, and F. Gustafsson, “Extended target tracking using polynomials with applications to road-map estimation,” IEEE Transactions on Signal Processing, vol. 59, no. 1, pp. 15–26, 2011.
[3] L. Sun, X. R. Li, and J. Lan, “Modeling of extended objects based on support functions and extended Gaussian images for target tracking,” IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 4, pp. 3021–3035, 2014.
[4] J. S. Hernandez, E. M. Izquierdo, and A. A. Hidalgo, “Improving parameters selection of a seeded region growing method for multiband image segmentation,” IEEE Latin America Transactions, vol. 13, no. 3, pp. 843–849, 2015.
[5] L. Yi, G. Zhang, and Z. Wu, “A scale-synthesis method for high spatial resolution remote sensing image segmentation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 10, pp. 4062–4070, 2012.
[6] Z. Li and X. Wang, “High resolution radar data fusion based on clustering algorithm,” in Proceedings of the 2nd International Workshop on Database Technology and Applications (DBTA '10), November 2010.
[7] T. Zhang, R. Ramakrishnan, and M. Livny, “BIRCH: a new data clustering algorithm and its applications,” Data Mining and Knowledge Discovery, vol. 1, no. 2, pp. 141–182, 1997.
[8] G. Vivone, P. Braca, and B. Errasti-Alcala, “Extended target tracking using joint probabilistic data association filter on X-band radar data,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '15), pp. 5063–5066, Milan, Italy, July 2015.
[9] G. Vivone, P. Braca, K. Granström, A. Natale, and J. Chanussot, “Converted measurements random matrix approach to extended target tracking using X-band marine radar data,” in Proceedings of the 18th International Conference on Information Fusion (Fusion '15), Washington, DC, USA, July 2015.
[10] K. Granström, C. Lundquist, and O. Orguner, “Extended target tracking using a Gaussian-mixture PHD filter,” IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 4, pp. 3268–3286, 2012.
[11] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, “An efficient k-means clustering algorithm: analysis and implementation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 881–892, 2002.
[12] Y. Yang, Z. Ma, Y. Yang, F. Nie, and H. T. Shen, “Multitask spectral clustering by exploring intertask correlation,” IEEE Transactions on Cybernetics, vol. 45, no. 5, pp. 1069–1080, 2015.
[13] Y. He, J. Wang, L. X. Qin, L. Mei, Y. F. Shang, and W. F. Wang, “A HK clustering algorithm based on ensemble learning,” in Proceedings of the IET International Conference on Smart and Sustainable City (ICSSC '13), pp. 300–305, Shanghai, China, August 2013.
[14] Z. Zhao, K. Ji, X. Xing, W. Chen, and H. Zou, “Ship classification with high resolution TerraSAR-X imagery based on analytic hierarchy process,” International Journal of Antennas and Propagation, vol. 2013, Article ID 698370, 13 pages, 2013.
[15] M. Khodjet-Kesba, K. El Khamlichi Drissi, S. Lee, K. Kerroum, C. Faure, and C. Pasquier, “Comparison of matrix pencil extracted features in time domain and in frequency domain for radar target classification,” International Journal of Antennas and Propagation, vol. 2014, Article ID 930581, 9 pages, 2014; Corrigendum in vol. 2015, Article ID 792451, 1 page, 2015.
[16] L. Luo, L. Xu, and H. Zhang, “Improved centroid extraction algorithm for autonomous star sensor,” IET Image Processing, vol. 9, no. 10, pp. 901–907, 2015.
[17] M. V. Arbabmir, S. M. Mohammadi, S. Salahshour, and F. Somayehee, “Improving night sky star image processing algorithm for star sensors,” Journal of the Optical Society of America A: Optics and Image Science, and Vision, vol. 31, no. 4, pp. 794–801, 2014.
[18] S. Wang and C. Gao, “Plot data processing algorithms and simulation trace,” in Proceedings of the International Conference on Computational Intelligence and Software Engineering (CiSE '09), pp. 1–4, Wuhan, China, December 2009.
[19] X. Wang, A. Li, and B. Shan, “A novel data fusion method applied in radar imaging matching position,” in Proceedings of the 2nd International Conference on Intelligent Systems Design and Engineering Applications (ISDEA '12), pp. 992–994, Sanya, China, January 2012.
[20] S.-Y. Wan and W. E. Higgins, “Symmetric region growing,” IEEE Transactions on Image Processing, vol. 12, no. 9, pp. 1007–1015, 2003.
[21] Y. Masutani, K. Masamune, and T. Dohi, “Region-growing based feature extraction algorithm for tree-like objects,” in Visualization in Biomedical Computing, K. H. Höhne and R. Kikinis, Eds., vol. 1131 of Lecture Notes in Computer Science, pp. 159–171, 1996.
[22] T. Zhang, X. Yang, S. Hu, and F. Su, “Extraction of coastline in aquaculture coast from multispectral remote sensing images: object-based region growing integrating edge detection,” Remote Sensing, vol. 5, no. 9, pp. 4470–4487, 2013.
[23] M. W. Ren, J. Y. Yang, and H. Sun, “Tracing boundary contours in a binary image,” Image and Vision Computing, vol. 20, no. 2, pp. 125–131, 2002.
[24] L. Cai, L. He, T. Yamashita, Y. Xu, Y. Zhao, and X. Yang, “Robust contour tracking by combining region and boundary information,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 12, pp. 1784–1794, 2011.
[25] J. Seo, S. Chae, J. Shim, D. Kim, C. Cheong, and T. Han, “Fast contour-tracing algorithm based on a pixel-following method for image sensors,” Sensors, vol. 16, no. 3, article 353, 2016.
[26] A. Benavoli, L. Chisci, A. Farina, S. Immediata, L. Timmoneri, and G. Zappa, “Knowledge-based system for multitarget tracking in a littoral environment,” IEEE Transactions on Aerospace and Electronic Systems, vol. 42, no. 3, pp. 1100–1116, 2006.
[27] G. Zhou, C. Yu, C. Zong, T. Quan, and N. Cui, “Clutter-knowledge-based target tracking method in complex conditions,” in Proceedings of the IET International Radar Conference, Guilin, China, April 2009.
[28] L. Yang, Z. Ning, and Y. Qiang, “Cognitive detector: a new architecture for target detection and tracking in complex environment,” in Proceedings of the 3rd International Conference on Intelligent System and Knowledge Engineering (ISKE '08), pp. 685–688, November 2008.
Copyright
Copyright © 2016 Bo Yan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.