Mathematical Problems in Engineering
Volume 2018, Article ID 8738316, 11 pages
https://doi.org/10.1155/2018/8738316
Research Article

A Multitarget Visual Attention Based Algorithm on Crack Detection of Industrial Explosives

School of Automation Science and Engineering, South China University of Technology, Wushan Rd., Tianhe District, Guangzhou 510640, China

Correspondence should be addressed to Haibo Xu; 201510101991@mail.scut.edu.cn

Received 4 September 2017; Revised 26 January 2018; Accepted 11 February 2018; Published 26 March 2018

Academic Editor: Marco Perez-Cisneros

Copyright © 2018 Haibo Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper presents a novel study on crack detection of industrial explosives. The proposed algorithm consists of the following steps: image preprocessing is performed according to the defect features of the industrial explosive cartridge, and an improved visual attention based algorithm is developed. The proposed algorithm features a parametric analysis that can be implemented on the image according to the conspicuous maps, with the introduction of the concept of defect discrimination; compared with other algorithms, our method can perform real-time multitarget detection. A new analysis method, the IPV-WEV algorithm, is proposed to analyze the cartridge defects based on performance indices. Comparison and experimentation revealed that this method achieves a detection accuracy of 97.9% with a detection time of 34.51 ms, which satisfies the requirements of industrial explosives production.

1. Introduction

With the rapid development of the Chinese economy and technical innovation in the industrial explosive materials industry, the production scale of industrial explosives has been continuously expanding. Under such circumstances, vigorous improvement of the continuity and automation level of the production line has become an inevitable trend. However, due to various factors, such as malfunctioning of automatic industrial explosives packing equipment, the quality of raw materials, and interference in the production environment, numerous defects acquired during the packaging process may appear on explosive cartridges, which can affect the efficiency and quality of explosives production [1–3]. Therefore, real-time and efficient defect detection and classification of industrial explosives have become key factors for improving production quality and personnel security.

Crack defect is one of the most common detectable cartridge defects. A cracked cartridge looks normal in shape but has breaks on its surface; the defect manifests as a difference in crack information between the inspected cartridge and a standard cartridge, that is, a brightness inconformity between a pixel subset and the corresponding pixel block of the standard cartridge. Crack defects are difficult to forecast because their distribution is uncertain and the crack scale, depth, and position are randomly distributed. Furthermore, interference from surface text and trademark texture increases the difficulty of detection. In the practical production process, numerous cartridges exhibit this defect, mainly due to poor heat-sealing on the side edge. The correlation between the manifestation of the crack defect and the illumination intensity is illustrated in Figure 1.

Figure 1: Classification for cartridge defect detection. (a) and (b) images are, respectively, side edge leakage cartridge and port edge leakage cartridge; (c) and (d) images are, respectively, not full cartridge and cartridge crack detection.

As the defects are usually found on cartridge packages during the production and packaging of industrial explosive cartridges, this paper aims to solve the key problem by introducing a proper algorithm that can effectively extract the packaging characteristics of the cartridges and locate the defective cartridges, which must be separated from the normal ones. According to the specific conditions of the industrial explosive production line, this paper proposes a visual attention based search strategy to prevent the acquired cartridge image from being affected by natural light and to reduce the noise interference introduced in the following stages. However, due to the random distribution of the salient characteristics and the interference from surface text, multiple defect targets may exist, as shown in Figure 1. Therefore, this paper adopts an improved visual attention based search algorithm to extract the crack characteristics of the cartridge.

The paper is organized as follows: Section 2 discusses and reviews the related defect detection algorithms. In Section 3, an improved visual attention based algorithm is proposed; this method can be applied to multitarget crack detection. Simulation experiments are described in Section 4. An analysis algorithm, the image partitioning variance-weighted eigenvalue (IPV-WEV) based algorithm, is proposed in Section 5, and Section 6 concludes the paper.

2. The Relevant Visual Attention Algorithms

Research on machine vision systems started very early abroad. Malamas et al. introduced the application of industrial visual detection systems, system composition, the main approaches for industrial visual application, and the main hardware and software of such systems [4]. They also gave a general review of the four types of detection applications, namely, dimensional quality, surface integrity, structural quality, and operational quality, according to the detection objects and procedural features. Moganti et al. gave an overview of the application of industrial detection technologies to the manufacture of printed circuit boards [5]. Recently, research on the application of machine vision technology to the inspection of product quality has become increasingly popular. Moreover, owing to higher requirements on product packaging inspection and on surface defect detection, positioning, and recognition, this technology has been widely applied to the production of drugs, videos, mechanical parts, electronics, textile products, and so on [6]. This research has mainly focused on feature selection and extraction and on the classification of feature patterns.

Li et al. proposed a rather robust algorithm, free from the interference produced by dirt on the eggshell surface, to extract the cracked eggshell and detect the presence of tiny cracks; furthermore, it can be used to train a neural network on the image pixel density histogram [7]. Razmjooy et al. proposed an appearance detection solution with scale measurement [8]; they focused on applying mathematical methods to automation, especially equation solving, through the design, implementation, and classification of algorithms, making a simple scale-based classification of images through binarization processing. Jia et al. modified the image recognition method after analyzing the radiation characteristics of heating components [9]; they filtered out the infrared radiation interference in the image information obtained with a charge-coupled device camera through a low-pass filter. In addition, Shen et al. designed a new illuminated image recognition system [10]: of the three bearings in an image, the bearings on the left and right were used to detect the distortion defect, whereas the bearing in the middle was used to detect other defects near the deformation point. Tellaeche et al. proposed two approaches, namely, image segmentation and decision making [11]. First, they segmented the image into multiple regions of the same scale; then they used a support vector machine (SVM) classifier to analyze the features and attributes extracted from every region, and the results of the analysis determined the presence or absence of weeds in a given area. In dimensional measurement and shape detection, the object scale is measured from an image to determine whether it lies within the permitted tolerance, that is, whether the shape or scale of the object conforms to the requirement. With regard to such applications, Jiménez et al. proposed identifying fruit on trees based on a shape recognition algorithm [12]; they used laser ranging to locate the positions of the fruits for automatic harvesting. According to a model-based computer vision method, Magee and Seida designed and implemented a detection system that can estimate the shape of an object in a complex industrial environment [13].

Operation quality detection was implemented to verify whether an accurate operation had been performed on the tested products as per manufacturing processes or standards. To make a classification suitable for riding modes, Gao and Duan started by describing the main characteristics of different images according to distance and then applied SVM [14].

3. An Improved Visual Attention Based Algorithm

Two processes influence visual saliency: top-down and bottom-up. Visual attention based algorithms reveal the mechanisms of biological visual intelligence [15–19] and have found wide application, as reported by computer vision researchers [20–26]. Li et al. analyzed visual saliency in the frequency domain and employed a hypercomplex Fourier transform algorithm [27]. In recent years, new computational models based on visual saliency have been proposed in the welding industry, agriculture, food inspection, and so on [28–32]; however, graph-based visual saliency (GBVS) [33] is ineffective when the object is a gray-level image, as in Figure 2.

Figure 2: False detection. (a) and (c) are source images ((a) image has one defect region, whereas (c) image has two defect regions). (b) and (d) are the detection results by GBVS.

Feature extraction refers to the extraction of the cartridge crack defect characteristics. In this section, an improved visual attention based algorithm is adopted to extract the defect characteristics of the diagnosed cartridge. This algorithm simulates the neural structure and behavior of the early primate visual system and demonstrates a powerful capacity for real-time processing of complex scenes. By integrating multiscale image features into a topological saliency map, it can quickly select prominent positions through an efficient computing method and then identify the crack defect on the cartridge after further detailed analysis. Currently, common visual attention based algorithms are mainly used to detect defects or scratches in products with neat textures, such as ceramic tile, cotton, or cloth. In this case, a global feature extraction algorithm was chosen to process the image before applying the visual attention based algorithm; it performs its calculation mainly on two statistical parameters, the mean value and the standard deviation, over different window scales. The application of the visual attention based algorithm, along with the specific procedures, is shown in Figure 3.

Figure 3: The proposed visual attention model.

We established a feature-based model according to three performance indices, namely, edge, intensity, and orientation. Regarding the brightness feature, pyramid operators were used to generate a nine-scale pyramid, ranging from scale 0 (1:1) to scale 8 (1:256) over 8 octaves; feature maps were then obtained by calculating the difference between fine and coarse scales. Meanwhile, we used the Gabor and Roberts operators [33] to generate a direction feature pyramid and an edge feature pyramid for the image, respectively. The image pyramid is a simple but efficient tool for interpreting an image in a multiresolution manner and was applied to image compression and machine vision in its early stages. The high-definition image lies at the bottom of the pyramid, and the image resolution decreases as the pyramid level rises.

We employed the Gaussian pyramid, the Gabor operator, and the Roberts operator to extract the intensity, orientation, and edge features, respectively. The feature template was calculated with the center-surround operator, implemented through the difference between the fine and coarse scales. For the center-surround operator, normalization weakens the similarity between maps and increases their difference according to the following principle: first, the values of each computed map were normalized to a fixed range [0, M]; then the maximum value M and the mean m̄ of the other local maxima in the map were calculated, and the whole map was multiplied by (M − m̄)². Through this operation, the feature maps of intensity, edge, and orientation were obtained.
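The normalization principle above can be sketched in NumPy. This is a minimal illustration of the standard Itti-style operator N(·); the neighborhood size for locating local maxima and the target range M = 1 are our own choices, not values from the paper:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def itti_normalize(feature_map, target_max=1.0, local_size=7):
    """Itti-style normalization N(.): promote maps with one strong peak,
    suppress maps whose local maxima are all comparable."""
    fm = feature_map.astype(float)
    lo, hi = fm.min(), fm.max()
    if hi - lo < 1e-12:                       # constant map carries no saliency
        return np.zeros_like(fm)
    fm = (fm - lo) / (hi - lo) * target_max   # normalize to [0, M]
    M = fm.max()
    # locate local maxima, then exclude the global maximum itself
    local_max = fm == maximum_filter(fm, size=local_size)
    peaks = fm[local_max]
    peaks = peaks[peaks < M]
    m_bar = peaks.mean() if peaks.size else 0.0
    return fm * (M - m_bar) ** 2              # multiply by (M - m_bar)^2
```

A map with a single dominant peak keeps most of its energy, while a map with many equally strong peaks is suppressed toward zero.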

Through multiscale combination, the acquired feature templates were processed with multiscale composition operators: the coarser map was interpolated to the finer scale, a point-to-point subtraction was performed, and normalization was applied. We then adopted the discrimination fusion operator to integrate the three conspicuous maps of edge (E), intensity (I), and orientation (O) into a saliency map SM, as in (1), where D represents the discrimination fusion operator and N(·) denotes the normalization operator.

Although the linear weighted fusion operator adopted in the Itti model is a simple image fusion method, it can omit a great deal of information during fusion. Therefore, this paper proposes a defect-discrimination-based fusion operator and defines the defect discrimination D as the degree of differentiation between the defect region and the other regions in an image. A larger value of D reflects a defect region with a higher gray value and a larger area relative to the lower-gray-value, smaller nondefect regions; a larger D thus indicates a higher weight and a greater contribution to the image fusion process. In this case, the defect-discrimination-based fusion operator is defined as in (2): each conspicuous map receives a weight ω proportional to its discrimination, so that, in the fusion process, different conspicuous maps contribute differently to the saliency map SM. The specific methods for calculating D are shown in (3).
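The fusion step can be illustrated with a short sketch. Since the exact form of operator (2) is not legible in this copy, the sketch assumes the weights are the discriminations normalized to sum to one (ω_k = D_k / ΣD), which is consistent with the three weights reported in Section 4 summing to 1; `fuse_maps` and its argument names are hypothetical:

```python
import numpy as np

def fuse_maps(maps, discriminations):
    """Fuse normalized conspicuous maps into one saliency map with
    discrimination-based weights w_k = D_k / sum(D)."""
    d = np.asarray(discriminations, dtype=float)
    w = d / d.sum()                              # weights sum to 1
    sal = sum(wk * m for wk, m in zip(w, maps))  # weighted combination
    return sal, w

# Example with three constant maps and the weights reported in Section 4.
intensity, orientation, edge = np.ones((4, 4)), 2 * np.ones((4, 4)), 3 * np.ones((4, 4))
saliency, weights = fuse_maps([intensity, orientation, edge],
                              [0.5002, 0.2254, 0.2744])
```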

The image was segmented into n (n being a natural number) region sets according to the gray value, and each randomly created region R_i (i = 1, …, n) was assumed to be smoothly closed. Let f(x, y) be a function of the coordinates x and y of a pixel. Based on the characteristics of a defect region, namely, its large intensity area and its regional edge, D_I (the intensity discrimination), D_O (the orientation discrimination), and D_E (the edge discrimination) can be defined separately as in (3), where L_j denotes the jth closed edge of R_i and ds denotes the integral infinitesimal.

4. Experiment

4.1. Image Preprocessing

Firstly, image preprocessing was performed on the original image, comprising background estimation, image difference, and brightness adjustment. An opening algorithm was chosen for background estimation. The simulation results depend on the type of structural element (SE) [34]. The input image is shown in Figure 4, and the image preprocessing is presented in Figures 5–7.
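The preprocessing chain (background estimation by morphological opening, image difference, brightness adjustment) can be sketched roughly as below. The square SE is an approximation of the diamond/disk elements explored in Figures 5 and 6, and the gain value is illustrative:

```python
import numpy as np
from scipy.ndimage import grey_opening

def preprocess(img, se_size=10, gain=1.5):
    """Background estimation by grayscale opening, subtraction of the
    background, then a simple linear brightness adjustment."""
    img = img.astype(float)
    # opening with a square SE approximating the diamond/disk elements
    background = grey_opening(img, size=(se_size, se_size))
    diff = img - background                # image difference step
    diff = np.clip(diff * gain, 0, 255)    # brightness adjustment
    return diff.astype(np.uint8)
```

Varying `se_size` and `gain` reproduces the kind of parameter sweep shown in Figures 5 and 6.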

Figure 4: Input image.
Figure 5: Image preprocessing. Step  1: the first row shows the SE type of a diamond; from left to right, the scale is 5, 10, and 15. The second row shows the SE type of a disk; from left to right, the scale is 5, 10, and 15. The last row shows the SE type of a line (scale 15); from left to right, the slope angle is 0.25π, 0.5π, and 0.75π.
Figure 6: Image preprocessing. Step  2: the first row shows the SE type of a diamond (scale 10); from left to right, the adjustment is 0.5, 1.5, and 2.0. The second row shows the SE type of a disk (scale 10); from left to right, the adjustment is 0.5, 1.5, and 2.0. The last row shows the SE type of a line, with scale of 45 and slope angle of 0.25π; from left to right, the adjustment is 1.0, 1.5, and 2.0.
Figure 7: Multitarget defect image preprocessing.
4.2. Feature Measurement

Through the above-mentioned processes, most of the background interference contained in the cartridge image was removed. However, the residual unprocessed information still included some interference comparable to the defect feature. In this part, we used a complex image, containing two defect clusters, as the input source. As shown in Figure 7(a), the image preprocessing adopted the above algorithms; hence, it is still necessary to determine the specific defect features in order to filter out the interference factors.

To strengthen the feature contrast, we built a feature-based model after the image preprocessing and created a pyramid for each of the three feature types, namely, brightness, direction, and edge, displayed at four levels, from Level 0 to Level 3, as the remaining hierarchies were too small to be graphically displayed. As shown in Figure 8(a), the image resolution dropped as the pyramid level rose.

Figure 8: Adopted visual attention algorithm. (a) From first to third row: brightness, direction, and edge. (b) Upper left shows brightness combination, while upper right shows the direction combination. Bottom left shows edge combination, and bottom right shows the operator combination of brightness, direction, and edge.

The center-surround difference algorithm was adopted before the application of the multiscale fusion method. Then, we chose fusion levels 1–4 to separately obtain the conspicuous maps for the brightness, direction, and edge features, which are shown in Figure 8(b). The three conspicuous maps were integrated into a saliency map with the discrimination fusion operator (3); the weights ω_I, ω_O, and ω_E were 0.5002, 0.2254, and 0.2744, respectively.

5. The Image Partitioning Variance-Weighted Eigenvalue-Based Analysis Method

5.1. Principle

This section proposes an IPV-WEV-based analysis method to realize the simultaneous recognition and positioning of multiple defect positions in the saliency map SM. The design and the implementation of the IPV-WEV method have been elaborated with the specific steps provided as follows.

(1) Image Segmentation. The saliency map SM was segmented into subimages SM_ij as in (4): from a macro point of view, every subimage represents an element of the map SM, while each SM_ij is itself a matrix of pixels.

(2) The Extraction of Defect Subimages. Due to the characteristic difference among the pixels at the defect and nondefect positions in the saliency map , the variance, which was used to describe the degree of variation between the image pixel value and the mean value, can better reflect the salient features of the defect. It is clear that the variance of the image with defect is greater than that of the image without defect. Hence, the defect position can be determined through the calculation of the subimage variance and the comparison of the mean square error of the whole image according to the specific algorithm provided as follows.

(a) The mean value μ and the variance σ² of the whole saliency map SM were calculated as in (5).

(b) The mean value μ_ij and the variance σ_ij² of each segmented subimage SM_ij were calculated as in (6).

(c) Over the whole image, most pixel gray values are comparable; they are large only in the defect region. For a segmented subimage containing no defect information, the gray-value fluctuation is weak, so its variance is smaller than that of the whole image. Conversely, if the subimage contains defect features, the gray-value fluctuation is significantly larger, and its variance exceeds that of the whole image. Hence, the discrimination function (7) can be defined: the answer is "YES", denoting the presence of a defect cluster in the subimage, when the subimage variance is greater than the whole-image variance. Thus, defect features can be analyzed on each subimage according to the discrimination function.
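Steps (a)–(c) can be sketched as follows; the block grid shape and variable names are illustrative:

```python
import numpy as np

def defect_blocks(sal, rows, cols):
    """Split the saliency map into rows x cols sub-images and flag the
    blocks whose variance exceeds the whole-image variance (step (c))."""
    h, w = sal.shape
    bh, bw = h // rows, w // cols
    global_var = sal[:rows * bh, :cols * bw].var()   # step (a)
    flags = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            block = sal[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            flags[i, j] = block.var() > global_var   # steps (b)-(c)
    return flags
```

A block containing a bright defect cluster has a much larger variance than the mostly uniform whole image, so only that block is flagged.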

(3) The Calculation of the Weighted Eigenvalue. Through the construction of a weighted covariance matrix following the principal component analysis conception, this paper calculated the weighted eigenvalues based on the gray value of every pixel to determine the defect position on the saliency map SM.

(a) The center pixel of the subimage was defined as in (8).

(b) The weighted covariance matrix of the subimage was calculated as in (9), whose elements are defined in (10)–(12), respectively.

(c) According to the characteristic equation of the weighted covariance matrix, the eigenvalues λ1 and λ2 can be calculated as follows:
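Since the exact entries of (9)–(12) are not legible in this copy, the following sketch uses one common PCA-style construction: the gray-value-weighted covariance of pixel coordinates about the geometric block centre, with the eigenvalues obtained from the characteristic equation. Treat it as an assumption-laden illustration rather than the authors' exact formulas:

```python
import numpy as np

def weighted_eigenvalues(block):
    """Gray-value-weighted covariance of pixel coordinates about the
    block centre, returning eigenvalues lambda1 >= lambda2."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    wgt = block.astype(float)
    s = wgt.sum()
    if s == 0:                                  # empty block: no structure
        return 0.0, 0.0
    xc, yc = (w - 1) / 2.0, (h - 1) / 2.0       # geometric centre pixel
    dx, dy = x - xc, y - yc
    cxx = (wgt * dx * dx).sum() / s
    cyy = (wgt * dy * dy).sum() / s
    cxy = (wgt * dx * dy).sum() / s
    C = np.array([[cxx, cxy], [cxy, cyy]])
    lam = np.linalg.eigvalsh(C)                 # solves det(C - lambda*I) = 0
    return float(lam[1]), float(lam[0])
```

Comparing the two eigenvalues then indicates whether the block's salient mass is isotropic or elongated along a crack.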

5.2. Experiment

The experimental simulation consisted of two major parts: in the first part, the optimal parameters were determined based on the effect of the parameters on the detection results of the IPV-WEV algorithm; in the second part, a comparison among the IPV-WEV, region-growing, and WTA neural network algorithms was presented.

The saliency map, in which the intensity, orientation, and edge features were combined, was taken as the input image. We segmented the input image into different block scales; the image blocks can be seen in Figures 9 and 10.

Figure 9: Image segmentation.
Figure 10: Image segmentation (block scale ).

The crack position was obtained by calculating the subimage variance and comparing it with the mean square error of the whole image. The experimental simulation yielded the variance of the whole saliency map and, for the two tested block scales, the variances of every segmented subblock, which are presented in Table 1.

Table 1: Variance between and block scale.
5.3. Crack Defect Subimage Analysis
5.3.1. Algorithm Analysis

We adopted the weighted eigenvalue to quantitatively measure and analyze the defect subimages: the weighted covariance matrix of each extracted defect subimage was constructed, its weighted eigenvalues were calculated, and the presence of a defect was finally determined by comparing λ1 and λ2. The defect detection results are illustrated in Figure 11.

Figure 11: The detection results based on the weighted eigenvalues. Eigenvalue 1 and eigenvalue 2 denote λ1 and λ2, respectively. Note: numbers 1-2, 3-4, 5-9, and the remainder correspond to the eigenvalue pairs of different subimages.

Table 3 shows the detection time and detection rate for different block scales.

Comparison of the above simulation results with the data revealed that the defect position can be determined by calculating the eigenvalue of every subimage after computing its variance and comparing it with the mean square error of the whole image. The block scale affects defect positioning: for the coarser image blocks, the detection time is reduced because of the small number of blocks, so, although defect features can still be detected, the defect position within the subimage can only be determined to some extent, and only an approximate position is obtained. For the finer image blocks, the defect position can be precisely detected (Table 2).

Table 2: Variance between and block scales.
Table 3: Detection efficiency.

(3) When defects were contained in different subimage blocks, the defect positions could be determined from the connectivity of the cracks, as the pixels in a crack defect area constitute a locally connective region. If the detected defects of multiple subimages can be connected through combination, then all these subblocks are determined to lie on defect positions. However, a subimage that is isolated, without connection to any other flagged subblock, is presumed not to lie on a defect position.
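The connectivity check can be sketched with connected-component labelling of the flagged block grid; `confirm_defects` is a hypothetical helper, and treating every isolated flagged block as a false alarm follows the presumption stated above:

```python
import numpy as np
from scipy.ndimage import label

def confirm_defects(flags):
    """Keep only flagged sub-image blocks that form a connected group;
    an isolated flagged block is treated as a false alarm."""
    labelled, n = label(flags)            # 4-connectivity by default
    confirmed = np.zeros_like(flags, dtype=bool)
    for k in range(1, n + 1):
        component = labelled == k
        if component.sum() > 1:           # connected to at least one neighbour
            confirmed |= component
    return confirmed
```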

5.3.2. Algorithm Comparison

In this experiment, 150 cartridge samples (75 positive and 75 negative) were used to separately calculate the detection rate and defect detection time for the proposed IPV-WEV algorithm, the WTA model [34], the SLIC model [30], and the region-growing algorithm [32]. The cartridge defect detection accuracy was measured as the ratio of correctly detected samples to the total number of samples. The comparison of the detection precisions of the proposed algorithm, the WTA model, and the region-growing algorithm is presented in Table 4 and Figure 12.

Table 4: Comparison of algorithm defect results.
Figure 12: The precision-recall curve and ROC curve.

The WTA algorithm detects cracks on the whole defect image through a charge-discharge mechanism: the first region to finish recharging is the defect position. However, as only one defect can be detected at a time, multitarget detection must be performed several times. The region-growing algorithm, on the other hand, determines a "seed" for region growing and combines all the regions that meet the growing conditions to locate the defect position; with this method, too, only one defect can be detected at a time, so detecting multiple defects requires growing the region several times.

By contrast, owing to the characteristic difference between the pixels at defect and nondefect positions in the saliency image, the variance, which describes the degree of variation between the pixel values and the mean value, better reflects the salient features of the defect: a larger image variance indicates a more dispersed gray-scale distribution, so the variance of an image with a defect is evidently greater than that of one without. With the proposed IPV-WEV algorithm, only the variance of every image block needs to be calculated; based on the comparison between the block variance and the mean square error of the whole image, multiple defects can be detected simultaneously, without individually searching for pixels or "recharging", once the variance is taken as the discrimination function. For this reason, the proposed IPV-WEV algorithm outperformed the other two algorithms in terms of detection time.

6. Conclusion

The main objective of this paper was to propose a crack detection algorithm for industrial explosives. The proposed algorithm consisted of the following: image preprocessing was performed according to the defect features of the industrial explosive cartridge, and an improved visual attention based algorithm was proposed. This algorithm features a parametric analysis that can be implemented on the image according to the conspicuous maps, with the introduction of the concept of defect discrimination; compared with other algorithms, the proposed method can perform real-time multitarget detection. The proposed IPV-WEV algorithm was able to analyze the cartridge defects based on performance indices. Comparison experiments among the algorithms revealed that the proposed method achieves a detection accuracy of 97.9% with a detection time of 34.51 ms, which satisfies the requirements of industrial explosives production.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Authors’ Contributions

Haibo Xu conceived and designed the study. Buhai Shi performed the experiments. Haibo Xu and Qingming Zhang reviewed and edited the manuscript. All authors read and approved the manuscript.

References

  1. S. Z. He, “Research and development of twin screw pump in emulsified explosive packing machine,” Process Equipment & Piping, 2005. View at Google Scholar
  2. S. U. Ming-Yang, “Image of twin-screw-type filling machine on characteristics of packaged emulsion explosives,” Engineering Blasting, 2010. View at Google Scholar
  3. S. Li, C. Lu, Y. Cai, and J. Gui, “Study on improvement of emulsified explosive packing process,” Explosive Materials, 2012. View at Google Scholar
  4. E. N. Malamas, E. G. M. Petrakis, M. Zervakis, L. Petit, and J.-D. Legat, “A survey on industrial vision systems, applications and tools,” Image and Vision Computing, vol. 21, no. 2, pp. 171–188, 2003. View at Publisher · View at Google Scholar · View at Scopus
  5. M. Moganti, F. Ercal, C. H. Dagli, and S. Tsunekawa, “Automatic PCB inspection algorithms: a survey,” Computer Vision and Image Understanding, vol. 63, no. 2, pp. 287–313, 1996. View at Publisher · View at Google Scholar · View at Scopus
  6. X.-F. Ding, L.-Z. Xu, X.-W. Zhang, F. Gong, A.-Y. Shi, and H.-B. Wang, “A model of saliency-based selective attention for machine vision inspection application,” in Proceedings of the International Conference on Adaptive and Natural Computing Algorithms, 2011. View at Publisher · View at Google Scholar · View at Scopus
  7. Y. Li, S. Dhakal, and Y. Peng, “A machine vision system for identification of micro-crack in egg shell,” Journal of Food Engineering, vol. 109, no. 1, pp. 127–134, 2012. View at Publisher · View at Google Scholar · View at Scopus
  8. N. Razmjooy, B. S. Mousavi, and F. Soleymani, “A real-time mathematical computer method for potato inspection using machine vision,” Computers & Mathematics with Applications, vol. 63, no. 1, pp. 268–279, 2012.
  9. Z. Jia, B. Wang, W. Liu, and Y. Sun, “An improved image acquiring method for machine vision measurement of hot formed parts,” Journal of Materials Processing Technology, vol. 210, no. 2, pp. 267–271, 2010.
  10. H. Shen, S. X. Li, D. Y. Gu, and H. X. Chang, “Bearing defect inspection based on machine vision,” Measurement, vol. 45, no. 4, pp. 719–733, 2012.
  11. A. Tellaeche, G. Pajares, X. P. Burgos-Artizzu, and A. Ribeiro, “A computer vision approach for weeds identification through support vector machines,” Applied Soft Computing, vol. 11, no. 1, pp. 908–915, 2011.
  12. A. R. Jiménez, R. Ceres, and J. L. Pons, “A vision system based on a laser range-finder applied to robotic fruit harvesting,” Machine Vision and Applications, vol. 11, no. 6, pp. 321–329, 2000.
  13. M. Magee and S. Seida, “An industrial model based computer vision system,” Journal of Manufacturing Systems, vol. 14, no. 3, pp. 169–186, 1995.
  14. Z. Gao and L. Duan, “Vision detection of vehicle occupant classification with Legendre moments and support vector machine,” in Proceedings of the 3rd International Congress on Image and Signal Processing (CISP '10), pp. 1979–1983, October 2010.
  15. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
  16. J. Harel, C. Koch, and P. Perona, “Graph-based visual saliency,” in Proceedings of the 20th Annual Conference on Neural Information Processing Systems (NIPS '06), pp. 545–552, December 2006.
  17. Q. Zhang, H. Liu, J. Shen, G. Gu, and H. Xiao, “An improved computational approach for salient region detection,” Journal of Computers, vol. 5, no. 7, pp. 1011–1018, 2010.
  18. P.-J. Hsieh, J. T. Colas, and N. Kanwisher, “Pop-out without awareness: unseen feature singletons capture attention only when top-down attention is available,” Psychological Science, vol. 22, no. 9, pp. 1220–1226, 2011.
  19. L. Itti and C. Koch, “Computational modelling of visual attention,” Nature Reviews Neuroscience, vol. 2, no. 3, pp. 194–203, 2001.
  20. J. Cong and Y. Yan, “Application of human visual attention mechanism in surface defect inspection of steel strip,” China Mechanical Engineering, vol. 22, no. 10, pp. 1189–1221, 2011.
  21. G. Li, H. Luo, M. Tang, H. Mu, and Z. Zhou, “A machine vision inspection algorithm for contamination in cotton based on visual attention mechanism,” Application of Electronic Technique, 2012.
  22. S. Liu, Z. Cao, and J. Li, “A SVD-based visual attention detection algorithm of SAR image,” Lecture Notes in Electrical Engineering, vol. 246, pp. 479–486, 2014.
  23. Y. Liu, L. Chen, and W. Shi, “Applications of an algorithm of image preprocessing based on visual attention mechanisms in industrial inspection,” Electronic Science & Technology, 2016.
  24. M. Mancas, C. Mancas-Thillou, B. Gosselin, and B. Macq, “A rarity-based visual attention map—application to texture description,” in Proceedings of the 2006 IEEE International Conference on Image Processing (ICIP '06), pp. 445–448, October 2006.
  25. R. Pal, “Applications of visual attention,” Innovative Research in Attention Modeling & Computer Vision Applications, 2016.
  26. X. Wang, B. Wang, and L. Zhang, “Airport detection in remote sensing images based on visual attention,” Lecture Notes in Computer Science, vol. 7064, pp. 475–484, 2011.
  27. J. Li, M. D. Levine, X. An, X. Xu, and H. He, “Visual saliency based on scale-space analysis in the frequency domain,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 4, pp. 996–1010, 2013.
  28. Y. He, Y. Chen, Y. Xu, Y. Huang, and S. Chen, “Autonomous detection of weld seam profiles via a model of saliency-based visual attention for robotic arc welding,” Journal of Intelligent & Robotic Systems, vol. 81, no. 3-4, pp. 395–406, 2016.
  29. M. M. Cheng, G. X. Zhang, N. J. Mitra, X. Huang, and S. Hu, “Global contrast based salient region detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '11), pp. 409–416, Providence, RI, USA, June 2011.
  30. R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “SLIC superpixels compared to state-of-the-art superpixel methods,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274–2281, 2012.
  31. A. Aboudib, V. Gripon, and G. Coppin, “A biologically inspired framework for visual information processing and an application on modeling bottom-up visual attention,” Cognitive Computation, vol. 8, no. 6, pp. 1007–1026, 2016.
  32. J. K. Garner and D. M. Russell, The Symbolic Dynamics of Visual Attention During Learning: Exploring the Application of Orbital Decomposition, Springer International Publishing, 2016.
  33. G. E. Kalliatakis, T. Kounalakis, G. Papadourakis, and G. A. Triantafyllidis, “Image based touristic monument classification using Graph Based Visual Saliency and Scale-Invariant Feature Transform,” in Proceedings of the IASTED International Conference on Computer Graphics and Imaging (CGIM '12), pp. 261–266, June 2012.
  34. P. Bourgine and A. Lesne, Morphological and Mutational Analysis: Tools for the Study of Morphogenesis, Springer, Berlin, Germany, 2011.