Journal of Sensors
Volume 2018, Article ID 3238140, 11 pages
https://doi.org/10.1155/2018/3238140
Research Article

A Novel Saliency Detection Method for Wild Animal Monitoring Images with WMSN

1School of Technology, Beijing Forestry University, Beijing 100083, China
2Chongqing Mobile Communications Limited Company, Chongqing 404100, China

Correspondence should be addressed to Junguo Zhang; zhangjunguo@bjfu.edu.cn

Received 9 February 2018; Accepted 18 April 2018; Published 6 June 2018

Academic Editor: Francesco Dell'Olio

Copyright © 2018 Wenzhao Feng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a novel saliency detection method, based on the histogram contrast algorithm, for images captured with a WMSN (wireless multimedia sensor network) for practical wild animal monitoring. Current studies on wild animal monitoring mainly focus on analyzing images with high resolution, complex backgrounds, and nonuniform illumination, and most current visual saliency detection methods are not capable of handling such images. In this algorithm, we first smooth the image texture and reduce the noise with a structure extraction method based on image total variation. After that, the saliency target edge information is obtained by the Canny edge detection operator and further improved by a position saliency map based on the Hanning window. To verify the efficiency of the proposed algorithm, field-captured wild animal images were tested with our algorithm in terms of visual effect and detection efficiency. Compared with the histogram contrast algorithm, the results show that the average precision, recall, and F-measure improved by 18.38%, 19.53%, and 19.06%, respectively, when processing the captured animal images.

1. Introduction

Preservation of wild animals is crucial for the balance and stability of the whole ecosystem. However, excessive hunting and killing of wild animals remains a serious problem around the world [1]. According to preliminary statistics, over 300 kinds of terrestrial vertebrates are in an endangered state. Saliency region detection is capable of effectively extracting wild animal region information. Besides, it provides an option for scanning and matching important target regions in wild animal detection and recognition [2, 3]. Hence, saliency region detection for wild animal images is becoming more and more significant in the animal protection realm and has become a focus of recent research.

Traditional wild animal detection and recognition methods [4] mainly use collected images of wild animals as test and training sets for learning purposes. These experimental samples need to be treated by several rounds of screening and preprocessing so that they contain complete, clear, and low-noise image features. However, traditional detection algorithms cannot effectively process animal images captured during wild animal monitoring missions because of the complex backgrounds and nonuniform illumination in the original images. Therefore, proposing an appropriate and effective detection method is a crucial prerequisite for solving this problem.

At present, visual saliency detection techniques [5] can quickly and automatically extract the main image information and remove redundant background information, which has won wide attention from both domestic and foreign researchers [6–8]. Most visual attention-related saliency detection methods are built on the foundation of biological theory [9–11]. However, these algorithms produce low-resolution saliency maps, and their computational complexity is high. Another popular approach is based on model analysis [12–15]. Although it has good detection efficiency and its detection results coordinate well with human eye characteristics, it cannot effectively process the rich texture information in monitoring scenes. Saliency object detection [16–18] can efficiently separate salient objects from the image background. Klein et al. developed salient object detection based on a standard structure with the help of cognitive visual attention models, and Yong et al. presented a framework that models semantic contexts for key-frame extraction from wildlife images. Nevertheless, most existing algorithms can only process images with simple backgrounds and ordinary resolutions. Besides, they are not applicable to field-captured images in practical wildlife monitoring missions.

Therefore, this paper aims at the demands of actual wild animal monitoring and focuses on solving the problem of high resolution, complex background, and nonuniform illumination that exist in wild animal image saliency detection research.

In this paper, a sample library of wild animal monitoring images was established. The dataset contains images captured from the Saihan Ula Nature Reserve in Inner Mongolia using a WMSN monitoring system. The WMSN monitoring system was configured and developed independently in our laboratory, with wireless remote, real-time, precise, and meticulous modules. The image dataset covers 12 species and 1000 HD wild animal images. We established a standard ground-truth image library through manual annotation. Based on the field monitoring images, we propose an improved histogram contrast detection method. The correctness and validity of this method are demonstrated by applying it to the wild animal images.

The contributions of the present study include the following: (1) we developed a wild animal monitoring system based on WMSN to capture experimental materials in the Saihan Ula Nature Reserve in Inner Mongolia; (2) we established an actual wild animal monitoring image database with unique characteristics; (3) we introduce a novel saliency detection method for wild animal images with high resolution, complex backgrounds, and nonuniform illumination.

2. Wild Animal Monitoring Based on WMSN

2.1. Wild Animal Monitoring System

Traditional wild animal monitoring methods include crewed field surveys, GPS (global positioning system) collars [19], infrared cameras [20], and satellite remote sensing [21]. However, these methods have defects such as limited monitoring range, data acquisition lag, and the inability to measure local micro-scale information.

Our wireless multimedia sensor network (WMSN) is mainly used to capture wild animal image materials utilizing industrial-grade cameras embedded in terminal node equipment. The wild animal monitoring system, which achieves remote, real-time, all-weather, and friendly monitoring goals, consists of WMSN terminal nodes, coordination nodes, gateway nodes, and a data storage center (back-end server). The detailed configuration is shown in Figure 1. The monitoring node devices developed by our laboratory are based on ZigBee network protocols. Detailed parameters are shown in Table 1.

Figure 1: Wild animal monitoring system.
Table 1: Parameters of WMSN node.

The monitoring node devices establish a wireless image sensor network in a self-organizing way using ZigBee network protocols. When wild animals enter the monitored field of view, the infrared sensor embedded in a terminal node triggers the camera to capture images. Captured images are first saved to an SD card. These images are then transmitted to the coordination node via multihop routing. After the coordination node successfully receives and converges the transmitted image data from all terminal nodes, the monitoring image information is transmitted to the data center through the gateway node over a wireless remote 4G link.

2.2. Fieldwork Material Collection

The WMSN monitoring system for wild animal monitoring was deployed in the Saihan Ula National Nature Reserve in Inner Mongolia, which lies in the Greater Khingan Mountains. The experimental area has a temperate semihumid rainy climate and an average altitude of 1000 m above sea level. Wild animals recorded in the experimental area include Cervus elaphus, Lynx, Capreolus pygargus, Sus scrofa, and Naemorhedus goral; Cervus elaphus and Lynx are national secondary protected animals (shown in Figure 2). In this paper, 1600 images of more than 12 wild animal species were acquired, with a total image data volume of 2.4 GB.

Figure 2: Wild animal monitoring images in Saihan Ula Nature Reserve.

By analyzing the captured images, we found that most images have complex backgrounds, nonuniform illumination, and varying target-to-image ratios. These image features affect the saliency detection work, especially in regions with large grayscale gradients.

3. Method Analysis

In this section, an improved histogram contrast detection method is proposed to process wild monitoring images with high resolution and complex backgrounds. Due to the particularity of the materials, both structure extraction and edge detection are introduced. As shown in Figure 3, we first apply a structure extraction method based on image total variation to extract the structure of the input images, which smooths the image texture and reduces image noise. Then the histogram-contrast-based saliency detection method is applied to capture the color saliency information of the image. By quantizing the input image to a small color range, the calculation procedure becomes simple and the computational efficiency is improved. Finally, the edge detection result and the position saliency map are synthesized to obtain the final optimized result.

Figure 3: Process of wild animal image saliency detection.
3.1. Structure Extraction Based on Image Total Variation

During the first step, we do not assume or manually determine the type of textures, as the patterns can vary widely across examples. We apply window total variation and window internal variation to the structure extraction method.

Firstly, the window total variations D_x(p) and D_y(p) of the image S at pixel p in the x and y directions are obtained by

D_x(p) = Σ_{q ∈ R(p)} g_{p,q} · |(∂_x S)_q|,  D_y(p) = Σ_{q ∈ R(p)} g_{p,q} · |(∂_y S)_q|,

where R(p) denotes the 19 × 19 square region centered at pixel p, and g_{p,q} ∝ exp(−((x_p − x_q)² + (y_p − y_q)²)/(2σ²)) denotes the Gaussian filter kernel function, in which (x_p, y_p) and (x_q, y_q) are the pixel coordinates and σ is the scale parameter of the function, controlling the spatial scale of the window.

To help distinguish prominent structures from texture elements, the window internal variations L_x(p) and L_y(p) are calculated according to

L_x(p) = |Σ_{q ∈ R(p)} g_{p,q} · (∂_x S)_q|,  L_y(p) = |Σ_{q ∈ R(p)} g_{p,q} · (∂_y S)_q|.

Finally, the optimized model with window total variation and internal variation is established:

S* = arg min_S Σ_p [ (S_p − I_p)² + λ · ( D_x(p)/(L_x(p) + ε) + D_y(p)/(L_y(p) + ε) ) ],

where S denotes the extracted structure image and I denotes the input image. The data term (S_p − I_p)² keeps the result from deviating from the input. λ is the smoothness coefficient of the image, and ε is a small positive number to avoid division by zero. With the optimization result, the contrast between the texture and the structure of the visually salient areas can be further enhanced.
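As a rough illustration, the windowed variations and the objective above can be sketched in Python with NumPy/SciPy. The names `window_variations` and `rtv_energy` are hypothetical, `gaussian_filter` stands in for the Gaussian-weighted window sum (it normalizes the kernel, so the result differs by a constant factor), and simple forward differences approximate the image gradients:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def window_variations(img, sigma=3.0):
    """Windowed total variation D and internal variation L per direction:
    D sums Gaussian-weighted absolute gradients over the window, while L
    takes the absolute value of the Gaussian-weighted gradient sum."""
    # Forward-difference gradients in the x and y directions.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    # Gaussian weighting approximates the sum over the window R(p).
    Dx = gaussian_filter(np.abs(gx), sigma)
    Dy = gaussian_filter(np.abs(gy), sigma)
    Lx = np.abs(gaussian_filter(gx, sigma))
    Ly = np.abs(gaussian_filter(gy, sigma))
    return Dx, Dy, Lx, Ly

def rtv_energy(S, I, lam=0.015, eps=1e-3):
    """Objective value of the structure-extraction model for a candidate S."""
    Dx, Dy, Lx, Ly = window_variations(S)
    data = np.sum((S - I) ** 2)                      # fidelity to the input
    smooth = np.sum(Dx / (Lx + eps) + Dy / (Ly + eps))
    return data + lam * smooth
```

Because the Gaussian weights are positive, D ≥ L holds pointwise, which is exactly what makes the ratio D/L large in textured regions (oscillating gradients cancel inside L) and close to 1 near genuine structure edges.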

3.2. Histogram Contrast Saliency Detection

The structure extraction smooths the texture and reduces the image noise; therefore, the extraction result can be used as the input of saliency detection. The input images are quantized according to the number of quantization channels CN. The main colors are arranged into a color matrix by histogram statistics. After that, the image pixels are reordered by color value so that terms with the same color value are grouped together. The saliency value between different colors is calculated as shown in (4); the saliency values are the same when the colors of the pixels are the same:

S(c_k) = Σ_{j=1}^{n} f_j · D(c_k, c_j),  (4)

where c_k is the color value of the kth color in the structure extraction image and S(c_k) represents the saliency value of c_k. n denotes the total number of colors in the input image, f_j represents the ratio of the pixels whose color value is c_j to the total number of pixels in the image, and D(c_k, c_j) denotes the color distance metric between colors c_k and c_j in Lab space.
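A minimal NumPy sketch of (4), assuming the image has already been quantized to `n_colors` color ids; `hc_saliency`, `color_index`, and `lab_colors` are illustrative names, not the paper's code:

```python
import numpy as np

def hc_saliency(color_index, n_colors, lab_colors):
    """Histogram-contrast saliency: S(c_k) = sum_j f_j * D(c_k, c_j).
    `color_index` is an H x W map of quantized color ids and `lab_colors`
    holds the n_colors x 3 Lab value of each id."""
    freq = np.bincount(color_index.ravel(), minlength=n_colors).astype(float)
    freq /= freq.sum()                        # f_j: fraction of pixels per color
    # Pairwise Lab distances D(c_k, c_j).
    dist = np.linalg.norm(lab_colors[:, None, :] - lab_colors[None, :, :], axis=2)
    sal_per_color = dist @ freq               # S(c_k) = sum_j f_j * D(c_k, c_j)
    sal_map = sal_per_color[color_index]      # same color -> same saliency value
    return sal_per_color, sal_map
```

Note the design consequence: a rare color surrounded by a dominant, distant color receives a high saliency value, which is what lets a small animal stand out against a large uniform background.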

Color quantization greatly simplifies the calculation procedure, but similar colors may be quantized to different values during the process. In order to reduce the noisy saliency results caused by such randomness, we replace the saliency value of each color with a weighted average of the saliency values of similar colors:

S′(c) = (1/((m − 1) · T)) · Σ_{j=1}^{m} (T − D(c, c_j)) · S(c_j),

where T = Σ_{j=1}^{m} D(c, c_j) denotes the sum of the distances between the color c and its m nearest colors. Typically, m is a quarter of the number of colors in the image after quantization.
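The smoothing step can be sketched as follows (again with hypothetical names); each color's saliency becomes a convex combination of the saliency of its m nearest colors, where nearer colors receive the larger weight T − D(c, c_j):

```python
import numpy as np

def smooth_color_saliency(sal, lab_colors, m=None):
    """Replace each color's saliency with a weighted average over its m
    nearest colors (m = n/4 by default, including the color itself)."""
    n = len(sal)
    m = max(2, n // 4) if m is None else m
    dist = np.linalg.norm(lab_colors[:, None, :] - lab_colors[None, :, :], axis=2)
    out = np.empty_like(sal)
    for k in range(n):
        nearest = np.argsort(dist[k])[:m]     # m nearest colors to color k
        d = dist[k, nearest]
        T = d.sum()                           # sum of distances to the m nearest
        if T == 0:                            # all m colors identical: keep value
            out[k] = sal[k]
            continue
        w = T - d                             # closer colors weigh more
        out[k] = (w * sal[nearest]).sum() / ((m - 1) * T)
    return out
```

Since the weights (T − d)/((m − 1)T) sum to 1, the smoothed values stay within the range of the original saliency values, so the operation can only dampen, never amplify, quantization noise.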

Then the saliency area is obtained by comparing the appearance frequency [22] of the first n color kinds. The high-frequency colors whose cumulative appearance frequency exceeds CF (the color retention rate) are kept, while the remaining low-frequency colors are discarded and replaced with their closest retained colors. The number of retained colors is increased in accordance with the number of quantization channels until the cumulative appearance frequency is greater than CF.
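One way to realize this retention rule is sketched below, under the assumption that "greater than CF" refers to the cumulative frequency covered by the retained colors; `retain_frequent_colors` is an illustrative name:

```python
import numpy as np

def retain_frequent_colors(color_index, lab_colors, cf=0.95):
    """Keep the most frequent colors that together cover at least a fraction
    cf of all pixels; map every discarded color to its nearest kept color."""
    freq = np.bincount(color_index.ravel(), minlength=len(lab_colors)).astype(float)
    freq /= freq.sum()
    order = np.argsort(freq)[::-1]            # color ids sorted by frequency
    cum = np.cumsum(freq[order])
    n_keep = int(np.searchsorted(cum, cf)) + 1
    kept = order[:n_keep]
    # Replace each color id with the nearest kept color in Lab space.
    dist = np.linalg.norm(lab_colors[:, None, :] - lab_colors[kept][None, :, :], axis=2)
    mapping = kept[np.argmin(dist, axis=1)]
    return mapping[color_index], kept
```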

3.3. Edge Detection Based on Canny Operator

The edge detection process convolves a Gaussian smoothing filter with the above saliency detection result to obtain an optimized approximation operator:

G(x, y) = f(x, y) * H(x, y),

where G(x, y) denotes the convolutional result, H(x, y) refers to the Gaussian convolution function, and (x, y) is the position of the pixel in the saliency result f.

Then the partial derivatives are obtained by calculating the first-order finite differences of the filtered result:

P_x(x, y) ≈ [G(x + 1, y) − G(x, y) + G(x + 1, y + 1) − G(x, y + 1)] / 2,
P_y(x, y) ≈ [G(x, y + 1) − G(x, y) + G(x + 1, y + 1) − G(x + 1, y)] / 2.

Among them, P_x represents the gradient partial derivative of the image in the x direction, and P_y is the gradient partial derivative in the y direction. Therefore, the pixel amplitude matrix M and gradient direction matrix θ are calculated as shown in the following equations:

M(x, y) = √(P_x(x, y)² + P_y(x, y)²),  θ(x, y) = arctan(P_y(x, y) / P_x(x, y)).

Finally, nonmaximum suppression is completed by seeking the amplitude maxima of the matrix along the gradient direction; the pixels with maximum amplitude are considered edge pixels. To close the image edges, this paper selects appropriate double thresholds (a high threshold and a low threshold). As a consequence, nonedge points that do not satisfy the threshold condition are removed. The connected domain is then expanded to get the final edge detection result.
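The smoothing, differentiation, and amplitude/direction steps can be sketched with NumPy/SciPy; nonmaximum suppression and double thresholding are omitted here, and simple one-pixel differences stand in for the 2 × 2 Canny differences:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude_direction(sal_map, sigma=1.4):
    """Gaussian smoothing followed by first-order finite differences, then
    the amplitude M = sqrt(Px^2 + Py^2) and direction theta = atan2(Py, Px)
    used before non-maximum suppression in Canny edge detection."""
    g = gaussian_filter(sal_map, sigma)           # G = f * H (Gaussian smoothing)
    px = np.diff(g, axis=1, append=g[:, -1:])     # partial derivative in x
    py = np.diff(g, axis=0, append=g[-1:, :])     # partial derivative in y
    M = np.hypot(px, py)                          # amplitude matrix
    theta = np.arctan2(py, px)                    # gradient direction matrix
    return M, theta
```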

3.4. Synthesis and Optimization

In this section, we optimize the center position weight of the two-dimensional images with a Hanning window function, constructed according to center-edge contrast theory. The one-dimensional Hanning window function is constructed as follows:

w(n) = 0.5 · (1 − cos(2πn/(N − 1))),  0 ≤ n ≤ N − 1,

where N denotes the input data length. Consequently, the two-dimensional Hanning window construction function w(x, y) is the scalar (outer) product of two one-dimensional functions:

w(x, y) = w(x) · w(y).
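The two-dimensional window is directly available as an outer product of NumPy's 1-D Hanning windows; `hanning_2d` is an illustrative helper:

```python
import numpy as np

def hanning_2d(h, w):
    """Two-dimensional Hanning window as the outer product of two 1-D
    windows w(n) = 0.5 * (1 - cos(2*pi*n/(N-1))); used as a position
    saliency map that favours the image centre."""
    wy = np.hanning(h)           # 1-D window along the height
    wx = np.hanning(w)           # 1-D window along the width
    return np.outer(wy, wx)      # outer product gives the 2-D window

hmap = hanning_2d(300, 400)      # position saliency map sized as in Section 3.5
```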

On the basis of the position saliency map, the saliency and edge detection results based on structure extraction are introduced to obtain a more accurate saliency detection result:

FSmap = (α · Smap + (1 − α) · Emap) · Hmap,

where FSmap denotes the synthetic saliency map and α is a weighting coefficient. Smap refers to the saliency detection result of histogram contrast, Emap is the edge detection result based on image structure extraction, and Hmap is the constructed position saliency map.
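A plausible fusion of the three maps is sketched below; the min-max normalization and the single weighting coefficient `alpha` are illustrative assumptions, not necessarily the paper's exact combination rule:

```python
import numpy as np

def synthesize(smap, emap, hmap, alpha=0.5):
    """Fuse the colour-saliency map and the edge map with weight alpha,
    then modulate by the position saliency map so that central regions
    are favoured; all maps are min-max normalized to [0, 1] first."""
    def norm(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else m
    fused = alpha * norm(smap) + (1 - alpha) * norm(emap)
    return norm(fused * norm(hmap))
```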

3.5. Experimental Parameters Selection

The color value of a single pixel in an RGB image has 256³ possible values; therefore, the number of colors to be processed can reach the order of 10⁷ during the color saliency calculation. In this paper, the color value of each channel (R, G, and B) is quantized to 12 levels (CN), so the number of colors that need to be calculated is reduced to 12³ = 1728. To ensure the smoothing effect, the filter function scale parameter σ is set to 3 after several single-variable experiments. The number of iterations is 3, and the smoothness coefficient λ is set to 0.015. In addition, the color retention rate CF is 0.95, retaining the high-frequency colors in the saliency area. The position saliency map, whose size is 300 × 400, is shown in Figure 4.
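The per-channel quantization to CN = 12 levels can be sketched as follows; `quantize_channels` is an illustrative name:

```python
import numpy as np

def quantize_channels(rgb, cn=12):
    """Quantize each RGB channel to cn levels, reducing the colour space
    from 256^3 values to cn^3 (1728 for cn = 12) before histogram
    statistics; returns per-channel levels and one colour id per pixel."""
    levels = (rgb.astype(np.int64) * cn // 256).clip(0, cn - 1)
    index = levels[..., 0] * cn * cn + levels[..., 1] * cn + levels[..., 2]
    return levels, index

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
levels, index = quantize_channels(rgb)       # index values lie in [0, 1727]
```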

Figure 4: Position saliency map.

4. Comparison and Discussion

To verify the effectiveness of the proposed algorithm, we selected actual wild animal monitoring images and images from a public image library as experimental samples and compared the results of our algorithm with those of other saliency detection algorithms.

Both precision and recall rate [13] are used as objective criteria to evaluate the accuracy of saliency detection. Precision is the ratio of the correctly detected salient region to the detected salient region, while recall is the ratio of the correctly detected salient region to the "ground-truth" salient region:

Precision = |S_d ∩ S_g| / |S_d|,  (14)
Recall = |S_d ∩ S_g| / |S_g|,  (15)

where S_g is the true saliency area of an image in the ground truth and S_d is the saliency area calculated by the saliency model in the saliency map; the rates are averaged over all images.
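For binary masks, (14) and (15) reduce to a few NumPy operations; `precision_recall` is an illustrative helper:

```python
import numpy as np

def precision_recall(pred_mask, gt_mask):
    """Precision = |detected ∩ ground truth| / |detected|;
    recall    = |detected ∩ ground truth| / |ground truth|."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    precision = inter / max(pred.sum(), 1)   # guard against empty detections
    recall = inter / max(gt.sum(), 1)
    return precision, recall
```

With a threshold of 0 every pixel is foreground, so precision equals the ground-truth foreground ratio while recall is 1.0, matching the behaviour described later for the precision-recall curves.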

4.1. Experiment to Wild Animal Images

The field samples of wild animal monitoring are selected from our private image database and cover different light intensities, capture distances, and backgrounds due to seasonal variations. Six classical saliency detection algorithms, consisting of CA (context aware) [23], SEG (segment) [24], LLV (low level vision) [25], WT (wavelet transform) [26], SR (spectral residual) [27], and HC (histogram contrast) [28], and five classical object detection algorithms, consisting of GS (global saliency) [29], FD (frequency domain) [30], GP (gestalt principle) [31], MC (multiscale contrast) [32], and DF (dynamic feature) [13], are compared with our proposed algorithm (SHC) to verify the detection effect in this section.

As the results in Figures 5 and 6 show, the detection method proposed in this paper localizes the object areas more accurately than the above classical algorithms. Our algorithm preserves edge information better, and its object areas are smoother even when the images contain rich colors and complex backgrounds.

Figure 5: Visual comparison of saliency detection in wild animal monitoring images.
Figure 6: Visual comparison of object detection in wild animal monitoring images.

We set the segmentation threshold from 0 to 255; the average precision and recall rates over all images of the private image database are shown in Figure 7.

Figure 7: Precision-recall rate curves.

The precision and recall rates of our algorithm are higher than those of the other eleven alternatives. We believe that the main structure extraction utilized in our algorithm successfully suppresses the influence of texture information on the detection result, and a relatively high recall rate tends to accompany a higher precision rate. The edge detection and the position saliency map further improve both the uniformity and the smoothness of the results.

In addition, when the segmentation threshold is 0, all pixels in the saliency map are considered foreground, so all algorithms have the same precision and recall rates (precision rate about 0.1, recall rate 1.0).

According to formulas (14) and (15), the precision and recall rates are negatively correlated. Therefore, we use the F-measure (also known as the F-score) to evaluate the effectiveness of saliency detection algorithms. The F-measure is the weighted harmonic mean of precision and recall:

F_β = ((1 + β²) · Precision · Recall) / (β² · Precision + Recall).

The F-measure is an overall performance measurement in which β is the weight parameter controlling the relative importance of precision and recall: the smaller β is, the more the measure emphasizes precision. β² is set to 0.3 herein.
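The computation is a one-liner; the default β² = 0.3 below is inferred from the fact that it reproduces the averages reported later in this section (precision 0.4895 and recall 0.7321 give F = 0.5300):

```python
def f_measure(precision, recall, beta_sq=0.3):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R);
    beta^2 = 0.3 weighs precision more heavily than recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0                           # avoid division by zero
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)
```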

We also introduce an adaptive threshold that is image saliency dependent, instead of using a constant threshold for each image. The adaptive threshold used to segment the saliency detection results is

T_a = (2 / (W × H)) · Σ_{x=1}^{W} Σ_{y=1}^{H} S(x, y),

where T_a is the obtained threshold value, W and H are the width and height of the saliency map, respectively, S denotes the saliency map, and (x, y) refers to the corresponding coordinate of the saliency map.
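Since T_a is simply twice the mean saliency of the map, the segmentation can be sketched in two lines; `adaptive_threshold_segment` is an illustrative name:

```python
import numpy as np

def adaptive_threshold_segment(sal_map):
    """Adaptive threshold T_a = (2 / (W*H)) * sum of saliency values,
    i.e. twice the mean saliency; pixels at or above T_a form the
    segmented salient object."""
    T = 2.0 * sal_map.mean()
    return (sal_map >= T).astype(np.uint8), T
```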

After obtaining the segmentation results based on the saliency maps, the precision, recall rate, and F-measure of all segmentation results are calculated, and their averages are taken as the comparison results for the different methods. The detailed results are shown in Figure 8.

Figure 8: Average precision, recall, and F-measure.

As shown in Figure 8, the average performance of the proposed algorithm is better than that of the other eleven algorithms. The average precision rate, recall rate, and F-measure of the detection results of our algorithm are 0.4895, 0.7321, and 0.5300 (shown in Tables 2 and 3), improvements of 18.38%, 19.53%, and 19.06%, respectively, over the HC algorithm. Although the average precision rate of the SEG algorithm is higher than that of our algorithm, its higher computational complexity makes it less efficient.

Table 2: Detailed data with different algorithms in saliency detection.
Table 3: Detailed data with different algorithms in object detection.

We have compared the average running time of each algorithm, and the comparison table is shown in Table 4. All experiments are performed using MATLAB (R2014a) on the workstation with Intel (R) Core (TM) i3-2330 and 4 GB RAM.

Table 4: Mean time comparison of detection.

The SR algorithm costs the least calculation time because it uses domain transformation and simple filtering to obtain the saliency map. However, its detection accuracy and quality are not satisfactory. Compared with the HC algorithm, the saliency detection result of our algorithm is better despite a slightly higher calculation time. The above results show that our algorithm is more suitable for the application of wild animal image saliency detection.

5. Conclusion

In this paper, we proposed a novel saliency detection algorithm based on histogram contrast for wild animal monitoring images, which can deal with high resolution, rich colors, and high noise. The proposed method consists of four steps, namely, structure extraction, saliency detection, edge detection, and synthesis optimization. Firstly, structure extraction is applied to smooth the image texture and reduce image noise. Saliency detection using histogram contrast then extracts the wild animal area from the images. Next, the Canny operator is implemented for edge detection to obtain complete saliency target edge information. Finally, the Hanning window is applied to make the saliency areas prominent. To demonstrate the efficiency and validity of the proposed method, images from the field-captured wild monitoring database were processed. The final results show that the proposed algorithm performs better than existing classical algorithms, especially on field-captured wild animal monitoring images: compared with the HC algorithm, the average precision rate, recall rate, and F-measure of the detection results obtained by our algorithm increase by 18.38%, 19.53%, and 19.06%, respectively.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was financially supported by National Natural Science Foundation of China (Grant no. 31670553), Fundamental Research Funds for the Central Universities (Grant no. 2016ZCQ08), and Import Project under China State Forestry Administration (Grant no. 2014-4-05).

References

  1. B. Hori, R. J. Petrell, G. Fernlund, and A. Trites, “Mechanical reliability of devices subdermally implanted into the young of long-lived and endangered wildlife,” Journal of Materials Engineering and Performance, vol. 21, no. 9, pp. 1924–1931, 2012. View at Publisher · View at Google Scholar · View at Scopus
  2. P. Manipoonchelvi and K. Muneeswaran, “Region-based saliency detection,” IET Image Processing, vol. 8, no. 9, pp. 519–527, 2014. View at Publisher · View at Google Scholar · View at Scopus
  3. Y. Xue, R. Shi, and Z. Liu, “Saliency detection using multiple region-based features,” Optical Engineering, vol. 50, no. 5, article 057008, 2011. View at Publisher · View at Google Scholar · View at Scopus
  4. A. G. Villa, A. Salazar, and F. Vargas, “Towards automatic wild animal monitoring: identification of animal species in camera-trap images using very deep convolutional neural networks,” Ecological Informatics, vol. 41, pp. 24–32, 2017. View at Publisher · View at Google Scholar · View at Scopus
  5. K. Duncan and S. Sarkar, “Saliency in images and video: a brief survey,” IET Computer Vision, vol. 6, no. 6, pp. 514–523, 2012. View at Publisher · View at Google Scholar · View at Scopus
  6. M. Zeppelzauer, “Automated detection of elephants in wildlife video,” Eurasip Journal on Image and Video Processing, vol. 2013, no. 1, 2013. View at Publisher · View at Google Scholar
  7. P. Kamencay, T. Trnovszky, M. Benco, R. Hudec, P. Sykora, and A. Satnik, “Accurate wild animal recognition using PCA, LDA and LBPH,” in Proceedings of the 11th International Conference on ELEKTRO, pp. 62–67, Strbske Pleso, Slovakia, 2016. View at Publisher · View at Google Scholar · View at Scopus
  8. T. N. Vikram, M. Tscherepanow, and B. Wrede, “A saliency map based on sampling an image into random rectangular regions of interest,” Pattern Recognition, vol. 45, no. 9, pp. 3114–3124, 2012. View at Publisher · View at Google Scholar · View at Scopus
  9. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998. View at Publisher · View at Google Scholar · View at Scopus
  10. A. Rahman, D. Houzet, D. Pellerin, S. Marat, and N. Guyader, “Parallel implementation of a spatio-temporal visual saliency model,” Journal of Real-Time Image Processing, vol. 6, no. 1, pp. 3–14, 2011. View at Publisher · View at Google Scholar · View at Scopus
  11. Q. Lin, X. G. Xu, Y. Z. Zhan, and D. A. Liao, “Extracting regions of interest based on visual attention model,” in 2011 International Conference on Multimedia Technology, pp. 313–316, Hangzhou, China, 2011. View at Publisher · View at Google Scholar · View at Scopus
  12. J. Harel, C. Koch, and P. Perona, “Graph-based visual saliency,” in Proceedings of the 20th Annual Conference on Neural Information Processing Systems, NIPS 2006, pp. 545–552, Vancouver, Canada, 2006.
  13. T. Liu, Z. Yuan, J. Sun et al., “Learning to detect a salient object,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 2, pp. 353–367, 2011. View at Publisher · View at Google Scholar · View at Scopus
  14. W. Chen, T. Sun, M. Li, H. Jiang, and C. Zhou, “A new image co-segmentation method using saliency detection for surveillance image of coal miners,” Computers & Electrical Engineering, vol. 40, no. 8, pp. 227–235, 2014. View at Publisher · View at Google Scholar · View at Scopus
  15. P. Subudhi and S. Mukhopadhyay, “A fast texture segmentation scheme based on active contours and discrete cosine transform,” Computers & Electrical Engineering, vol. 62, pp. 105–118, 2017. View at Publisher · View at Google Scholar · View at Scopus
  16. D. Klein and S. Frintrop, “Center-surround divergence of feature statistics for salient object detection,” in 2011 International Conference on Computer Vision, pp. 2214–2219, Barcelona, Spain, 2011. View at Publisher · View at Google Scholar · View at Scopus
  17. S. P. Yong, J. D. Deng, and M. K. Purvis, “Wildlife video key-frame extraction based on novelty detection in semantic context,” Multimedia Tools and Applications, vol. 62, no. 2, pp. 359–376, 2013. View at Publisher · View at Google Scholar · View at Scopus
  18. L. Yang, G. Yang, Y. Yin, and R. Xiao, “Sliding window-based region of interest extraction for finger vein images,” Sensors, vol. 13, no. 3, pp. 3799–3815, 2013. View at Publisher · View at Google Scholar · View at Scopus
  19. J. M. L. Pérez, M. E. A. de la Varga, J. J. García, and V. R. G. Lacasa, “Monitoring lidia cattle with GPS-GPRS technology; a study on grazing behaviour and spatial distribution,” Veterinaria Mexico, vol. 4, no. 4, 2017. View at Publisher · View at Google Scholar
  20. A. Fernández-Caballero, M. López, and J. Serrano-Cuerda, “Thermal-infrared pedestrian ROI extraction through thermal and motion information fusion,” Sensors, vol. 14, no. 4, pp. 6666–6676, 2014. View at Publisher · View at Google Scholar · View at Scopus
  21. R. Handcock, D. Swain, G. Bishop-Hurley et al., “Monitoring animal behaviour and environmental interactions using wireless sensor networks, GPS collars and satellite remote sensing,” Sensors, vol. 9, no. 5, pp. 3586–3603, 2009. View at Publisher · View at Google Scholar · View at Scopus
  22. K. Smet, W. R. Ryckaert, M. R. Pointer, G. Deconinck, and P. Hanselaer, “Colour appearance rating of familiar real objects,” Color Research & Application, vol. 36, no. 3, pp. 192–200, 2011. View at Publisher · View at Google Scholar · View at Scopus
  23. S. Goferman, L. Zelnik-Manor, and A. Tal, “Context-aware saliency detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 10, pp. 1915–1926, 2012. View at Publisher · View at Google Scholar · View at Scopus
  24. E. Rahtu, J. Kannala, M. Salo, and J. Heikkilä, “Segmenting salient objects from images and videos,” in Proceedings of the 11th European Conference on Computer Vision, ECCV 2010, pp. 366–379, Heraklion, Crete, Greece, 2010. View at Publisher · View at Google Scholar · View at Scopus
  25. N. Murray, M. Vanrell, X. Otazu, and C. A. Parraga, “Saliency estimation using a non-parametric low-level vision model,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2011, pp. 433–440, Providence, RI, USA, 2011. View at Publisher · View at Google Scholar · View at Scopus
  26. N. Imamoglu, W. Lin, and Y. Fang, “A saliency detection model using low-level features based on wavelet transform,” IEEE Transactions on Multimedia, vol. 15, no. 1, pp. 96–105, 2013. View at Publisher · View at Google Scholar · View at Scopus
  27. X. D. Hou and L. Q. Zhang, “Saliency detection: a spectral residual approach,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, Minneapolis, MN, USA, 2007. View at Publisher · View at Google Scholar · View at Scopus
  28. M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S.-M. Hu, “Global contrast based salient region detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 3, pp. 569–582, 2015. View at Publisher · View at Google Scholar · View at Scopus
  29. J. Feng, “Salient object detection for searched web images via global saliency,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3194–3201, Providence, RI, USA, 2012. View at Publisher · View at Google Scholar · View at Scopus
  30. R. Arya, N. Singh, and R. K. Agrawal, “A novel hybrid approach for salient object detection using local and global saliency in frequency domain,” Multimedia Tools and Applications, vol. 75, no. 14, pp. 8267–8287, 2016. View at Publisher · View at Google Scholar · View at Scopus
  31. G. Kootstra and D. Kragic, “Fast and bottom-up object detection, segmentation, and evaluation using gestalt principles,” in 2011 IEEE International Conference on Robotics and Automation, pp. 3423–3428, Shanghai, China, 2011. View at Publisher · View at Google Scholar · View at Scopus
  32. H. Wang, L. Dai, Y. Cai, X. Sun, and L. Chen, “Salient object detection based on multi-scale contrast,” Neural Networks, vol. 101, pp. 47–56, 2018. View at Publisher · View at Google Scholar · View at Scopus