Research Article | Open Access
Sungho Kim, Kyung-Tae Kim, "Adjacent Infrared Multitarget Detection Using Robust Background Estimation", Journal of Sensors, vol. 2016, Article ID 7279081, 10 pages, 2016.

Adjacent Infrared Multitarget Detection Using Robust Background Estimation

Academic Editor: Guiyun Tian
Received: 22 Dec 2015
Revised: 09 Mar 2016
Accepted: 17 Mar 2016
Published: 30 Mar 2016


Small target detection is very important for infrared search and track (IRST) problems. Grouped targets are difficult to detect using the conventional constant false alarm rate (CFAR) detection method. In this study, a novel multitarget detection method was developed to identify adjacent or closely spaced small infrared targets. Neighboring targets decrease the signal-to-clutter ratio in hysteresis threshold-based constant false alarm rate (H-CFAR) detection, which leads to poor detection performance in cluttered environments. The proposed adjacent target rejection-based robust background estimation reduces the effects of the neighboring targets and enhances small multitarget detection performance in infrared images by increasing the signal-to-clutter ratio. Experimental results on synthetic and real adjacent target sequences show that the proposed method achieves a higher detection rate at the same false alarm rate than recent target detection methods (H-CFAR, Top-hat, and TDLMS).

1. Introduction

Automatic infrared target recognition (ATR) covers automatic target detection (ATD) and classification. If the target area is less than 100 pixels, the detection problem is called infrared small target detection, which is used in air traffic control, air defense, IR surveillance systems, and visible light communications (VLC). In particular, the infrared search and track (IRST) and active protection system (APS) use infrared small target detection to protect ships and tanks [1]. IRST systems are omnidirectional surveillance systems for the precise search, detection, and recognition of threat targets [2]. APS is a recent technology that protects tanks from rocket attacks with a physical counterattack [1].

Small infrared target detection has several research issues, such as dim target detection and the reduction of false detections caused by ground clutter, cloud clutter, and sea-glint. The dim target detection issue is commonly addressed using the track-before-detect (TBD) technique. Previous detection methods generally focused on how to reduce the false detections using either spatial filters [3–5] or temporal filters [6–8]. Cloud clutter can be rejected using the mean filter [9], least mean square filter [10, 11], median filter [9], and Top-hat filter [5, 12]. Sun-glint can be removed using frequency information, such as the 3D-FFT spectrum [13], wavelet transform [14–16], low pass filter [17], and adaptive high pass filter [18]. Recently, an improved two-dimensional least mean square filter (TDLMS) and Top-hat filter were proposed to increase the detection capability [19–21].
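As an illustration of the spatial-filter family above, a standard morphological white top-hat (a simpler relative of the sequential top-hat filters in [19]) can be sketched in Python; the function name and the toy scene are illustrative, not taken from the cited works.

```python
import numpy as np
from scipy.ndimage import white_tophat

def tophat_enhance(image, se_size=7):
    """Enhance small bright targets by suppressing smooth background.

    A white top-hat (image minus its morphological opening) keeps
    structures smaller than the structuring element; the element
    should be chosen slightly larger than the expected target size.
    """
    return white_tophat(image.astype(float), size=(se_size, se_size))

# Toy scene: a smooth horizontal gradient background plus a 3x3 bright target.
img = np.tile(np.linspace(50, 150, 64), (64, 1))
img[30:33, 30:33] += 40.0
out = tophat_enhance(img)
```

After filtering, the smooth gradient is suppressed toward zero while the small target's contrast is preserved, which is exactly the property the clutter-rejection filters above exploit before thresholding.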

These studies have their own advantages and disadvantages in specific scenarios and environments, and they reduce the rate of false detection assuming a single target or clearly separated multiple targets. In practice, the adjacent multitarget problem frequently arises in IRST and APS scenarios due to the long-distance imaging process. The abovementioned approaches generally adopt a fixed threshold or constant false alarm rate (CFAR) detector after the clutter-rejecting filters to minimize the number of false detections. Cao et al. applied a fixed threshold T = c·M after subtracting the predicted image from the original one, where M denotes the maximum grayscale of an image after background subtraction [20]. Bai et al. used a CFAR-like adaptive threshold T = m + ε·σ, where m and σ denote the average and standard deviation of a gray change map and ε is a constant [21]. If the same detectors are applied to closely spaced multiple-target scenarios, they fail to detect the targets. Until now, no suitable small target detection method for closely spaced multiple targets has been proposed for the IRST and APS problem; only Fernández attempted to solve a similar problem using superresolution processing with order statistics [22]. This study examined how to develop a small target detection method by carefully designing the detector for adjacent target scenarios, as shown in Figure 1. Two small targets exist in Figure 1(a), where the left target is a decoy and the right target is a true missile. Four neighboring group targets move in the air, as shown in Figure 1(b).

The conventional CFAR detector uses background statistics to set a detection threshold. In this paper, the adjacent multiple target detection problem is solved using the proposed hysteresis-multitarget CFAR (HM-CFAR) detection, which inserts an adjacent target rejection block in front of the estimation of the background statistics. The contributions of this paper can be summarized as follows. First, the mechanism of target missing in grouped targets is analyzed. Second, a new closely spaced target detection method is proposed for small infrared target detection applications, such as IRST and APS, by modifying the statistical estimation of the background image. Finally, the effectiveness of the proposed HM-CFAR is demonstrated by comparison with previous methods on a synthetic database and a real target sequence. Section 2 introduces the baseline detection method (H-CFAR) and its limitations by comparison with a well-known CFAR detector. Section 3 presents the proposed adjacent multitarget detection algorithm. The performance of this method is evaluated in Section 4 and the conclusions are reported in Section 5.

2. Background of the Small Infrared Target Detection Method

Baseline detection method (H-CFAR): in highly cluttered environments, typical infrared small target detection methods adopt spatial filtering followed by constant false alarm rate- (CFAR-) based thresholding because of its robustness to clutter [19, 24]. Figure 2 summarizes the flow of conventional small target detection. Given an input image (Figure 2(a)), filtering methods are applied to reduce the clutter or enhance the targets (Figure 2(b)). Low thresholding and clustering produce the candidate target regions, as shown in Figure 2(c). Through background cell selection and adaptive thresholding (CFAR), the final targets are detected, as shown in Figures 2(d) and 2(e). Previous approaches focused on the spatial or temporal filtering process to remove the background clutter; this paper focuses on the last block, the detector, particularly CFAR. The final detection decision is made by a threshold (k): if the filtered signal intensity of a target (s) is larger than k, the target is declared detected. The CFAR detector uses the additional information of the background statistics to maintain or reduce the effects of the background clutter. Figure 3 summarizes the operational concept of the CFAR detector, which adapts the threshold using the background statistics (mean μ_B and standard deviation σ_B): if the background level increases, the threshold increases automatically, which leads to constant false alarms.
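The CFAR principle above can be sketched as follows, assuming a Gaussian clutter model and a threshold of the form μ + kσ; the function and parameter names are illustrative.

```python
import numpy as np

def cfar_decision(signal, background, k=4.0):
    """Declare a detection when the probed value exceeds an adaptive
    threshold built from local background statistics (mu + k*sigma).

    Under a Gaussian clutter model, fixing k fixes the per-cell false
    alarm probability regardless of the absolute background level.
    """
    mu = background.mean()
    sigma = background.std()
    return signal > mu + k * sigma

rng = np.random.default_rng(0)
bg_dim = 10 + 2 * rng.standard_normal(200)      # low background level
bg_bright = 100 + 2 * rng.standard_normal(200)  # high background level

# The same target contrast (+15 above the local mean) is detected in
# both cases because the threshold tracks the background level, while
# a small contrast (+5) stays below the adaptive threshold.
hit_dim = cfar_decision(bg_dim.mean() + 15, bg_dim)
hit_bright = cfar_decision(bg_bright.mean() + 15, bg_bright)
miss = cfar_decision(bg_dim.mean() + 5, bg_dim)
```

This is the behavior Figure 3 depicts: the threshold rises and falls with the background, keeping the false alarm rate roughly constant.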

Recently, Kim and Lee proposed a hysteresis threshold-based constant false alarm rate (H-CFAR) detector [23]. As shown in Figure 4(a), the original CFAR (O-CFAR) detector probes all the pixels above the noise level [24]. On the other hand, the H-CFAR uses an adaptive hysteresis threshold consisting of a small threshold for candidate detection and a CFAR threshold for the final decision, as shown in Figure 4(b).

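A minimal sketch of the two-step hysteresis strategy, assuming a filtered image and a known background standard deviation; scipy's `label` stands in for the clustering step (its default is 4-connectivity; 8-NN clustering would pass a 3x3 structure), and the thresholds are illustrative.

```python
import numpy as np
from scipy.ndimage import label

def hcfar_detect(filtered, low_th, k_th, bg_sigma):
    """Two-step hysteresis detection sketch.

    Step 1: a small fixed threshold keeps only candidate pixels, so
    the CFAR test runs on a few clusters instead of every pixel.
    Step 2: each candidate cluster is kept only if its peak response
    exceeds k_th times the background standard deviation (CFAR test).
    """
    candidates, n = label(filtered > low_th)
    detections = []
    for i in range(1, n + 1):
        peak = filtered[candidates == i].max()
        if peak / bg_sigma > k_th:  # SCR test against background statistics
            detections.append(i)
    return detections

f = np.zeros((32, 32))
f[5, 5] = 50.0    # strong target: passes both thresholds
f[20, 20] = 12.0  # weak blob: passes the low threshold, fails the CFAR test
dets = hcfar_detect(f, low_th=10, k_th=5, bg_sigma=3.0)
```

The payoff over O-CFAR is that the expensive background-statistics test is evaluated only at candidate clusters, which is why the processing-time gap in Section 4 grows with image size rather than with target count.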

Limitations of H-CFAR in adjacent multitarget detection: the H-CFAR has a similar target missing problem to the original CFAR method. Figure 5(b) shows the target detection results using H-CFAR for a test image, Figure 5(a). The arrows indicate the ground truth targets and the solid rectangles represent the targets detected by applying the H-CFAR after a modified mean subtraction filter (M-MSF) [23]. The target missing problem originates from the adjacent targets during an estimation of the background statistics (standard deviation). Figure 5(c) illustrates such a phenomenon for the second target in Figure 5(b). Neighboring targets (1st and 3rd) belong to the background cell and increase the standard deviation of the background, which leads to a target missing problem. The problem is solved by the proposed hysteresis-multitarget CFAR (HM-CFAR) detection by inserting a robust target rejection block before estimating the background statistics. This idea is quite simple but powerful for detecting adjacent multiple small targets in infrared images.

3. Proposed Hysteresis-Multitarget CFAR (HM-CFAR) Detection

The proposed method is based on the H-CFAR detection given a spatial filter, such as M-MSF. As shown in Figure 6, an adjacent target pixel rejection block is inserted before the background statistics estimation in the HM-CFAR detector. The M-MSF filter provides center-surround enhancement by subtracting the background image from the prefiltered image given a test image (see Figure 2(b)). In HM-CFAR, low level thresholding and eight-nearest neighbor- (8-NN-) based clustering are used to find the candidate target region, called the target cell (see Figures 2(c) and 2(d)). The background cell size is set to three to four times the size of the target cell. The guard cell is a blank region that is used in neither cell and is set as a two- or three-pixel gap (see Figure 7(a)).
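The target/guard/background cell geometry can be sketched as boolean masks; the sizes follow the rough proportions stated above (background cell three to four times the target cell, two- or three-pixel guard gap), and the helper name is illustrative.

```python
import numpy as np

def make_cells(shape, bbox, guard=2, bg_scale=3):
    """Boolean masks for the target and background cells around a candidate.

    bbox = (r0, r1, c0, c1) is the clustered target region; the guard
    band is a small gap excluded from both cells, and the background
    cell extends the target cell by roughly bg_scale times its size.
    """
    r0, r1, c0, c1 = bbox
    h, w = r1 - r0, c1 - c0
    target = np.zeros(shape, bool)
    target[r0:r1, c0:c1] = True
    # Outer extent of the background cell (clipped to the image).
    er = (bg_scale * h) // 2 + guard
    ec = (bg_scale * w) // 2 + guard
    outer = np.zeros(shape, bool)
    outer[max(0, r0 - er):r1 + er, max(0, c0 - ec):c1 + ec] = True
    # Guard region: the target cell expanded by the guard gap.
    guard_zone = np.zeros(shape, bool)
    guard_zone[max(0, r0 - guard):r1 + guard, max(0, c0 - guard):c1 + guard] = True
    background = outer & ~guard_zone
    return target, background

target, background = make_cells((40, 40), (18, 21, 18, 21), guard=2, bg_scale=3)
```

The background mask is what feeds the statistics estimation in the next step; the guard gap keeps target-edge pixels from leaking into it.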

The probing region is declared a target if the signal-to-clutter ratio (SCR) is greater than the second threshold k_th, as defined in

SCR = s_max / σ_B > k_th, (1)

where s_max denotes the maximum target signal obtained using (2), in which Ω_T represents an index set of the target region and f(i) the filtered intensity at pixel i. This is the same as the maximum contrast between the input signal and the background clutter:

s_max = max_{i ∈ Ω_T} f(i). (2)

σ_B represents the background statistics, namely the standard deviation (STD). This is the key parameter in the HM-CFAR detector because it controls the detection rate and false alarm rate in the adjacent multitarget scenario. The missed targets can be detected if this parameter (σ_B) is estimated robustly. As shown in Figure 5(c), the parameter is estimated using the pixels belonging to the background cell, so neighboring target pixels cause an erroneous estimation of the background statistics.

The key idea is to reject the adjacent target pixels in the background cell (Ω_B) before estimating the background statistics. The adjacent targets normally appear as bright spots. For example, the dotted square in Figure 5(b) is regarded as a probing target region, and its background cell includes two adjacent bright targets that distort the background statistics. Based on this observation, the background parameter is estimated robustly using the rank method defined in

σ_B = STD({f(i) : i ∈ Ω_B^r}), (3)

where Ω_B^r is an index set containing the r% darkest pixels in the background cell; the percentage of adjacent target pixels in the background cell is then 100 − r. STD(·) denotes the standard deviation function, which calculates the statistics using the background pixels except for the adjacent target pixels. Figure 7 presents the effect of the proposed HM-CFAR in adjacent multitarget detection. The original H-CFAR uses all background pixels and shows an SCR of 5.5 (Figure 7(a)). On the other hand, HM-CFAR rejects the adjacent target pixels using (3) and shows an SCR of 12.2 for the same target (Figure 7(b)). Note that the correct background statistics were obtained using the proposed new block. As shown in Figure 7(c), the two targets missed in Figure 7(b) were detected correctly using the proposed HM-CFAR method. Therefore, the proposed new block can increase the detection rate by removing the effects of the adjacent targets when calculating the background statistics. Because the final threshold (k_th) in (1) controls the detection rate and the false alarm rate, it is tuned depending on the scenario.
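The rank-based estimator in (3) can be sketched as follows, assuming r denotes the kept (darkest) fraction of background pixels; the contamination values in the toy example are invented for illustration.

```python
import numpy as np

def robust_bg_std(bg_pixels, r=0.8):
    """Estimate the background STD from the darkest r*100% of the
    background-cell pixels (rank method), so that bright pixels from
    adjacent targets are rejected before the statistics are computed."""
    v = np.sort(np.asarray(bg_pixels, float).ravel())
    kept = v[:max(1, int(round(r * v.size)))]
    return kept.std()

rng = np.random.default_rng(1)
bg = 5.0 * rng.standard_normal(400)                      # true clutter, sigma ~ 5
contaminated = np.concatenate([bg, np.full(40, 80.0)])   # bright neighbor pixels

naive = contaminated.std()                   # inflated by the target pixels
robust = robust_bg_std(contaminated, r=0.9)  # close to the true clutter sigma
```

Because the adjacent-target pixels sit at the bright end of the sorted values, dropping the top fraction removes them without modeling them explicitly, which is why the SCR in the example above recovers from 5.5 to 12.2 once σ_B is no longer inflated.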

4. Experimental Results

In the first evaluation, the proposed HM-CFAR was compared with the O-CFAR [24] in terms of detection performance and processing time, as shown in Figure 8. A synthetic image was prepared by background modeling and target modeling. The background image has a sky region and a background region with an intensity difference of 100 gray values and no clutter, such as cloud or sun-glint. The horizontal boundary is smoothed column-wise using a Gaussian filter. Different numbers of adjacent targets were generated with different sizes and different SCR values, as shown in the top image of Figure 8(a). The detection rate of the proposed HM-CFAR was 100% (17/17) and that of the O-CFAR was 76.5% (13/17) with the same threshold, as shown in the middle and bottom of Figure 8(a). In addition, the processing time was compared by preparing test images with different numbers of synthetic targets, from 10 to 490. Figure 8(b) shows the comparison results. The O-CFAR detector takes approximately 16.1 seconds and its runtime increases linearly with the number of targets, whereas the HM-CFAR detector takes approximately 0.65 seconds and its runtime increases only slightly with the number of targets. The processing speed of the HM-CFAR is approximately 20 times faster than that of the O-CFAR.
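The synthetic scene construction described above can be sketched as follows; image size, intensity step, target amplitude, and function names are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def synth_scene(h=128, w=128, horizon=64, step=100.0, sigma=3.0):
    """Two-region synthetic background whose regions differ by a fixed
    intensity step, with the horizontal boundary smoothed column-wise
    by a Gaussian filter, as in the synthetic evaluation above."""
    img = np.full((h, w), 50.0)
    img[horizon:, :] += step                      # 100 gray-value difference
    img = gaussian_filter1d(img, sigma, axis=0)   # smooth along columns
    return img

def add_target(img, r, c, size=2, amplitude=30.0):
    """Paste a small square target whose amplitude controls its SCR."""
    img[r:r + size, c:c + size] += amplitude
    return img

scene = add_target(synth_scene(), 30, 40)
```

Varying `size` and `amplitude` over a grid of positions reproduces the kind of multi-target test image in Figure 8(a).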

In the second evaluation, the proposed HM-CFAR was compared with previous methods, H-CFAR [23], Top-hat [19], and TDLMS [20], for more quantitative comparisons. The input images were filtered using the same spatial filter, M-MSF, as in H-CFAR. The Top-hat method used a morphological filter with adaptive thresholding [19]. The TDLMS used a fixed threshold depending on the maximum intensity [20]. Two test image sets were prepared to validate the performance of the proposed method. One is a real infrared image sequence from the Seoul air show, consisting of four F-15K fighters in an adjacent formation flight in strong cloud clutter, acquired using a Cedip LWIR camera (Set 1). The other was generated using commercial software called OKTAL-SE (Set 2) [25]. OKTAL-SE is the only simulator that can synthesize both passive (IR) and active (synthetic aperture radar) sensor images. The scenario program selects the background and target trajectory, and SE-RAY-IR then synthesizes the IR sequences using the ray tracing method. For an active protection system (APS) in military applications, two targets (one the real target, the other a decoy) were inserted, and the incoming target distance was 1.23 km at Mach 6.

The detection performance was compared using the receiver operating characteristic (ROC) curve metric, based on the detection rate (DR) and false alarm rate (FAR) obtained by varying the adaptive threshold (k_th). The low level threshold was set to 10 in the HM-CFAR and H-CFAR methods. The adaptive threshold and fixed threshold were controlled in Top-hat and TDLMS, respectively. As shown in Figure 9, the proposed method outperforms the others (H-CFAR, Top-hat, and TDLMS) in terms of the ROC curve area for test Sets 1 and 2.
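The ROC evaluation can be sketched by sweeping the threshold over per-detection SCR scores; the score distributions below are invented for illustration.

```python
import numpy as np

def roc_points(scores_target, scores_clutter, thresholds, n_images):
    """Sweep the CFAR threshold k_th to trace a ROC curve.

    DR  = fraction of ground-truth targets whose score exceeds k_th.
    FAR = number of clutter responses above k_th per image.
    """
    pts = []
    for k in thresholds:
        dr = np.mean(scores_target > k)
        far = np.sum(scores_clutter > k) / n_images
        pts.append((far, dr))
    return pts

# Hypothetical SCR scores: targets around 10, clutter around 3.
rng = np.random.default_rng(2)
t = 10 + rng.standard_normal(100)
c = 3 + rng.standard_normal(500)
curve = roc_points(t, c, thresholds=np.linspace(0, 15, 16), n_images=50)
```

Comparing methods at a fixed FAR, as Table 1 does, corresponds to reading each curve at the same horizontal position.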

Table 1 lists the statistical performance comparison of the proposed HM-CFAR with the previous H-CFAR [23], Top-hat [19], and TDLMS [20] at the same FAR, indicated by the arrows in Figure 9. According to the results, the proposed HM-CFAR produced a much larger number of correct detections than the other methods. Figures 10 and 11 present the adjacent multitarget detection results for the cluttered images, where the small rectangles represent the detections and the large rectangles the ground truth locations. As indicated by the arrows, H-CFAR, Top-hat, and TDLMS often missed the adjacent multitargets because they regard the neighboring targets as clutter. Note the superior detection performance of the HM-CFAR-based method in the adjacent multitarget detection scenarios.

Table 1

DB     Method         Threshold   DR (%)   FAR (#/image)
Set 1  H-CFAR [23]    5.2         77.7     13.2
Set 1  Top-hat [19]   6.9         90.3     13.2
Set 1  TDLMS [20]     6           67.1     13.2
Set 2  H-CFAR [23]    7.5         71.0     5.0
Set 2  Top-hat [19]   18.6        38.0     5.0
Set 2  TDLMS [20]     6.3         90.0     5.0

5. Conclusions

The adaptive threshold-based small target detection method normally uses background statistics to produce constant false alarms. Although conventional methods work well in normal cluttered scenarios, they fail to detect multiple adjacent targets because they regard closely spaced targets as background clutter. This paper proposed a simple but powerful adjacent multitarget detection method for small infrared targets based on a novel target pixel rejection scheme that uses a ranking approach in the background statistics estimation. As validated by a set of experiments, the method can effectively find true targets in adjacent formation flight compared with recent methods (Top-hat, TDLMS). The computational overhead introduced by the new block is 0.04 sec/image (baseline: 0.28 sec/image, proposed: 0.32 sec/image). If there is sparse strong clutter, the number of false detections will increase. Nevertheless, the proposed method can be used for real-time applications on stationary and moving infrared camera platforms because of its algorithmic simplicity and strong detection capability on a single spatial image.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.


Acknowledgments

This work was supported by the 2015 Yeungnam University Research Grants.


References

1. T. J. Meyer, Active Protective Systems: Impregnable Armor or Simply Enhanced Survivability? Armor, 1998.
2. A. N. de Jong, “IRST and perspective,” in Infrared Technology XXI, vol. 2552 of Proceedings of SPIE, pp. 206–213, San Diego, Calif, USA, July 1995.
3. S. Kim, “Double layered-background removal filter for detecting small infrared targets in heterogenous backgrounds,” Journal of Infrared, Millimeter, and Terahertz Waves, vol. 32, no. 1, pp. 79–101, 2011.
4. H. Sang, X. Shen, and C. Chen, “Architecture of a configurable 2-D adaptive filter used for small object detection and digital image processing,” Optical Engineering, vol. 42, no. 8, pp. 2182–2189, 2003.
5. Y. L. Wang, J. M. Dai, X. G. Sun, and Q. Wang, “An efficient method of small targets detection in low SNR,” Journal of Physics, vol. 48, no. 1, pp. 427–430, 2006.
6. Y. S. Jung and T. L. Song, “Aerial-target detection using the recursive temporal profile and spatiotemporal gradient pattern in infrared image sequences,” Optical Engineering, vol. 51, no. 6, Article ID 066401, 2012.
7. S. Kim, “High-speed incoming infrared target detection by fusion of spatial and temporal detectors,” Sensors, vol. 15, no. 4, pp. 7267–7293, 2015.
8. B. Zhang, T. Zhang, Z. Cao, and K. Zhang, “Fast new small-target detection algorithm based on a modified partial differential equation in infrared clutter,” Optical Engineering, vol. 46, no. 10, Article ID 106401, 2007.
9. R. C. Warren, Detection of distant airborne targets in cluttered backgrounds in infrared image sequences [Ph.D. thesis], University of South Australia, 2002.
10. M. S. Longmire and E. H. Takken, “LMS and matched digital filters for optical clutter suppression,” Applied Optics, vol. 27, no. 6, pp. 1141–1159, 1988.
11. T. Soni, J. R. Zeidler, and W. H. Ku, “Performance evaluation of 2-D adaptive prediction filters for detection of small objects in image data,” IEEE Transactions on Image Processing, vol. 2, no. 3, pp. 327–340, 1993.
12. J.-F. Rivest and R. Fortin, “Detection of dim targets in digital infrared imagery by morphological image processing,” Optical Engineering, vol. 35, no. 7, pp. 1886–1893, 1996.
13. A. Kojima, N. Sakurai, and J. I. Kishigami, “Motion detection using 3D-FFT spectrum,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '93), vol. 5, pp. 213–215, Minneapolis, Minn, USA, April 1993.
14. R. N. Strickland and H. I. Hahn, “Wavelet transform methods for object detection and recovery,” IEEE Transactions on Image Processing, vol. 6, no. 5, pp. 724–735, 1997.
15. G. Boccignone, A. Chianese, and A. Picariello, “Small target detection using wavelets,” in Proceedings of the 14th International Conference on Pattern Recognition, pp. 1776–1778, Brisbane, Australia, August 1998.
16. Z. Ye, J. Wang, R. Yu, Y. Jiang, and Y. Zou, “Infrared clutter rejection in detection of point targets,” in Proceedings of the International Conference on Sensors and Control Techniques (ICSC '00), vol. 4077, pp. 533–537, Wuhan, China, June 2000.
17. Z. Zuo and T. Zhang, “Detection of sea-surface small targets in infrared images based on multilevel filters,” in Proceedings of the International Symposium on Multispectral Image Processing (ISMIP '98), vol. 3545 of Proceedings of SPIE, pp. 372–377, Wuhan, China, October 1998.
18. L. Yang, J. Yang, and K. Yang, “Adaptive detection for infrared small target under sea-sky complex background,” Electronics Letters, vol. 40, no. 17, pp. 1083–1085, 2004.
19. J. Zhou, H. Lv, and F. Zhou, “Infrared small target enhancement by using sequential top-hat filters,” in Proceedings of the International Symposium on Optoelectronic Technology and Application 2014: Image Processing and Pattern Recognition, vol. 9301 of Proceedings of SPIE, Beijing, China, May 2014.
20. Y. Cao, R. Liu, and J. Yang, “Small target detection using two-dimensional least mean square (TDLMS) filter based on neighborhood analysis,” International Journal of Infrared and Millimeter Waves, vol. 29, no. 2, pp. 188–200, 2008.
21. X. Bai, F. Zhou, and T. Jin, “Enhancement of dim small target through modified top-hat transformation under the condition of heavy clutter,” Signal Processing, vol. 90, no. 5, pp. 1643–1654, 2010.
22. M. F. Fernández, “Adaptive single-frame superresolution for detecting closely spaced IR targets in clutter,” IEEE Transactions on Aerospace and Electronic Systems, vol. 50, no. 4, pp. 2489–2499, 2014.
23. S. Kim and J. Lee, “Small infrared target detection by region-adaptive clutter rejection for sea-based infrared search and track,” Sensors, vol. 14, no. 7, pp. 13210–13242, 2014.
24. F.-Y. Xu, G.-H. Gu, and W. Qian, “The research and implementation of CFAR in infrared small target detection,” in Proceedings of the International Symposium on Photoelectronic Detection and Imaging 2011: Advances in Infrared Imaging and Applications, vol. 8193 of Proceedings of SPIE, Beijing, China, May 2011.
25. J. Latger, T. Cathala, N. Douchin, and A. Le Goff, “Simulation of active and passive infrared images using the SE-WORKBENCH,” in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing, vol. 6543 of Proceedings of SPIE, Orlando, Fla, USA, April 2007.

Copyright © 2016 Sungho Kim and Kyung-Tae Kim. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
