Mobile Information Systems

Special Issue: Artificial Intelligence and Edge Computing in Mobile Information Systems

Research Article | Open Access


Ye Zhang, Qiu Xie, Canlin Zhang, "Key Algorithms for Segmentation of Copperplate Printing Image Based on Deep Learning", Mobile Information Systems, vol. 2021, Article ID 9940801, 10 pages, 2021. https://doi.org/10.1155/2021/9940801

Key Algorithms for Segmentation of Copperplate Printing Image Based on Deep Learning

Academic Editor: Sang-Bing Tsai
Received: 11 Mar 2021
Revised: 10 May 2021
Accepted: 13 May 2021
Published: 26 May 2021

Abstract

As a branch of machine learning, deep learning has risen to prominence in a wide range of computer vision tasks thanks to its powerful feature learning ability. Deep learning methods extract the required features from the raw data and dynamically adjust and update the parameters of the neural network through the backpropagation algorithm, thereby learning features automatically. Compared with manual feature extraction, this improves recognition accuracy, and it can be used for the segmentation of copperplate printing images. This article mainly presents research on the key algorithms of copperplate printing image segmentation based on deep learning and intends to provide some ideas and directions for improving copperplate printing image segmentation technology. This paper introduces the related principles of the copperplate printing image synthesis process, the watershed algorithm, and the guided filtering algorithm and establishes an image segmentation model. On this basis, a deep learning-based optimization algorithm mechanism for the segmentation of copper engraving printing images is proposed, along with experimental steps including main color extraction in the segmentation of copper engraving printing images, adaptive main color extraction based on fuzzy set 2, and main color extraction based on fuzzy set 2. Experimental results show that the average processing time of the image segmentation model in this paper is 0.39 seconds per image, which is relatively short.

1. Introduction

Image segmentation is one of the key problems in computer vision, digital image processing, and related fields. It refers to partitioning an image into subregions that are internally consistent with respect to chosen image attributes, so that features within the same subregion are similar while features of different subregions show obvious differences. Finally, the target to be extracted can be separated from the background and output. Currently, the number of image segmentation methods runs into the thousands.

In recent years, with the continuous improvement of computer processing performance and computing power, and especially the huge development of artificial intelligence, deep learning has achieved great success in many fields such as speech recognition, natural language processing, and computer vision. Unlike traditional pattern recognition methods, techniques based on deep neural networks do not need manually designed features; through training, they automatically extract features with good information expression layer by layer, eliminating the tedious steps of manual feature extraction. Therefore, deep learning has gradually been applied to copperplate printing image segmentation.

Chen et al. observed that classification is one of the most popular topics in hyperspectral remote sensing. In the past two decades, experts have proposed many methods for the classification of hyperspectral data, but most of them do not extract features hierarchically. Chen et al. introduced the concept of deep learning to hyperspectral data classification for the first time: first, they verified the suitability of stacked autoencoders by following classical spectral-information-based classification; second, they proposed a new method of classification dominated by spatial information. A novel deep learning framework was proposed to merge these two kinds of features, from which the highest classification accuracy can be obtained. The framework is a mixture of principal component analysis, a deep learning architecture, and logistic regression; specifically, stacked autoencoders serve as the deep learning architecture to obtain useful high-level features. This research is relatively novel but lacks experimental data support [1]. Postalcolu used a convolutional neural network for image recognition on the Fruits-360 dataset: 70% of the images were selected as training data and the remainder for testing, with the image size set to 100 × 100 × 3. Training used stochastic gradient descent with momentum (sgdm), adaptive moment estimation (Adam), and root mean square propagation (rmsprop), with a training threshold of 98%: when the accuracy reached 98% or more, training was stopped, and the trained network was used to compute the final validation accuracy. This research is relatively one-sided and not very practical [2]. Guo et al. found that adding a spatial penalty term to the fuzzy c-means (FCM) model is an important way to reduce the impact of noise during image segmentation. Although such algorithms improve robustness to noise to a certain extent, they still have shortcomings: first, they are usually very sensitive to parameters that must be adjusted according to the noise intensity; second, in the case of uneven noise, using constant parameters for different image regions is clearly unreasonable and usually leads to undesirable segmentation results. To overcome these shortcomings, Guo et al. proposed an adaptive FCM based on noise detection for image segmentation. The new algorithm uses two image filtering methods to denoise while preserving detail, and the variance of the gray values in each neighborhood is measured to calculate the parameter that balances the two parts. This procedure is relatively complicated and not easy to popularize in practice [3].

The innovations of this paper are (1) proposing a deep learning-based optimization algorithm mechanism for the segmentation of copper engraving printing images and (2) proposing main color extraction in copperplate print image segmentation, adaptive main color extraction based on fuzzy set 2, and main color extraction based on fuzzy set 2.

2. Method

2.1. Related Principles of Copperplate Printing Image Synthesis Process

The printed matter must be as consistent as possible with the provided copper engravings; that is, the reproduction should look the same as the original under all lighting conditions, rather than achieving a “same spectrum” reproduction [4]. Usually, the transparent film provided by the picture provider has a larger color gamut than the printing technology can reproduce, so a compromise must be made when converting the image information to the printed matter: color gamut mapping is performed by the color management system, image feature information is compressed or discarded, or spot colors are added; spot colors are not part of the image’s color separation information, and separate color plates are generated for them [5].

In the entire printing process, the original is decomposed into different colors; the tone levels, color, and definition are adjusted according to printing needs and then screened; finally, the printing equipment overprints color by color on the substrate to complete color synthesis and obtain the final printed product, achieving color restoration and reproduction [6]. In color image synthesis, color separation is based on subtractive color mixing. In multicolor printing, halftone dots are independent of each other but also overlap, so on paper both subtractive color mixing (overprinting of monochromatic halftone dots) and additive color mixing (the comprehensive effect of independent monochromatic halftone dots perceived by the observer) occur [7].

2.2. Watershed Algorithm
2.2.1. Overview

The watershed algorithm maps the input image to a topographic map, where the gray value of the digital image corresponds to altitude: the higher the gray value, the higher the corresponding altitude. Contiguous pixels of varying value form mountains, basins, and other topographic features; the terrain rises and falls continuously, like gradient changes in an image [8].

The realization process of the watershed algorithm can be understood according to the process from the rise of the water level to the flooding of the mountain. First, suppose that the groundwater level gradually rises from the deepest underground, and the first geographic location to be touched is the depression of the basin at the lowest altitude. As the water level continues to rise, the water level in the depressions of the basin will gradually rise, slowly filling the entire basin [9]. At this time, a zero-height dam was built around the basin to prevent water from overflowing in the basin. Then, the water level continued to rise, the dam was submerged, and the water in the basin passed over the dam on the edge of the basin and merged with the water in the adjacent basin. Finally, the water level continued to rise until the highest mountain was submerged, and the water level stopped rising. Here, the block-shaped area enclosed by the dam is called a stagnant basin, and the edge (or dam) of the stagnant basin is called a watershed [10].

2.2.2. Algorithm Steps

The image is composed of pixels, each with a specific gray value. Let p be a pixel in the image, let $h_{\min}$ and $h_{\max}$ denote the minimum and maximum gray values in the image, and let C denote a catchment basin [11].

The threshold set at level h is expressed as

$$T_h = \{\, p \mid I(p) \le h \,\}, \quad h \in [h_{\min}, h_{\max}],$$

where I(p) is the image (gray-value) function.

The basin function is expressed as

$$C(N) = \{\, p \mid L(p, N) < L(p, N') \ \text{for every other local minimum } N' \,\},$$

where N is a local minimum in the image area: C(N) collects the points that are topographically closer to N than to any other minimum.

The path-distance function L(x, y) is expressed as

$$L(x, y) = \min_{l \in \Pi(x, y)} \int_{l} \lVert \nabla I(s) \rVert \, ds,$$

where $\Pi(x, y)$ is the set of paths l between any two points x and y in the divided area [12, 13].
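The immersion process described above can be sketched in a few lines of code (a simplified illustration, assuming 8-connectivity and ignoring plateau handling; it is not the implementation used in the paper):

```python
import numpy as np

def watershed(image):
    """Simplified immersion watershed (8-connectivity, no plateau queue).
    Pixels are flooded in increasing gray order; a pixel joins the basin of
    an already-labelled neighbour, starts a new basin at a local minimum,
    and becomes a watershed line (label 0) where two basins meet."""
    h, w = image.shape
    labels = np.full((h, w), -1, dtype=int)          # -1 means "still dry"
    next_label = 1
    for idx in np.argsort(image, axis=None, kind="stable"):
        r, c = divmod(int(idx), w)
        neighbours = set()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < h and 0 <= cc < w and labels[rr, cc] > 0:
                    neighbours.add(labels[rr, cc])
        if not neighbours:                           # new minimum -> new basin
            labels[r, c] = next_label
            next_label += 1
        elif len(neighbours) == 1:                   # extend the adjacent basin
            labels[r, c] = neighbours.pop()
        else:                                        # two basins meet -> dam
            labels[r, c] = 0
    return labels
```

On a small relief with two valleys separated by a ridge, each valley becomes its own basin and the ridge column is marked as the watershed (label 0).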

2.2.3. Algorithm Improvement

Processing the entire image uniformly is likely to confuse different targets and the foreground and background, producing the opposite of the intended effect. In order to further enhance the target and make small targets in a large image clearer, this paper uses the watershed computation to obtain target contours not for preliminary segmentation, but for uniformly enhancing the image within each region, so as to enhance the difference between regions [14]. The target area G obtained by the watershed algorithm is regarded as one target and processed as follows:

$$I'(p) = c \left( \frac{I(p)}{c} \right)^{\theta}, \quad p \in G,$$

where c is 255 and θ is the contrast correction parameter; the value of θ differs for regions of different colors. In order to apply the same correction strength and a uniform correction direction within one target area, this paper determines the θ value according to the distribution probability of the region's mean gray value:

$$\theta = \begin{cases} \theta_1, & \bar{I}(G) \le k, \\ \theta_2, & \bar{I}(G) > k, \end{cases}$$

where k is the gray-level threshold and $\bar{I}(G)$ is the mean gray value of region G [15, 16].
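As an illustration of region-wise contrast correction, the following sketch applies a gamma-style mapping I' = c·(I/c)^θ only inside a target region G; the exact functional form and the θ value used below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def region_gamma_correct(img, mask, theta, c=255.0):
    """Apply the contrast correction I' = c * (I / c) ** theta, but only
    inside the target region G given by a boolean mask, so that the
    correction strength and direction are uniform within one region."""
    img = img.astype(float)
    out = img.copy()
    out[mask] = c * (img[mask] / c) ** theta
    return out
```

For gray values in [0, c], theta < 1 brightens the region and theta > 1 darkens it, while pixels outside the mask are left untouched.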

2.3. Guided Filtering Algorithm
2.3.1. Overview

The purpose of the segmentation algorithm in this chapter is to segment the copper engravings as a whole, and the color distribution of a copper engraving is often continuous within a certain range. Therefore, image-guided filtering is selected to denoise the segmented image and facilitate extracting the copper engraving as a whole [17]. Guided filtering is an edge-preserving filter that smooths the image while maintaining boundaries. The algorithm needs a guide image, which can be another image of the same size or the original image itself [18]. When the original image itself is used as the guide, guided filtering smooths the image while keeping its edges; when another image is used as the guide, the filtering operation constrains the original image according to the gradients of that other image. Guided filtering is widely used in image denoising, image dehazing, and detail smoothing [19].

2.3.2. Algorithm Steps

The guided filtering technique assumes that the filtered output image q and the guide image I satisfy a certain local linear relationship [20]. Under this assumption, the linear transformation is given as follows:

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k. \qquad (6)$$

Among them, $\omega_k$ is a square neighborhood window centered on pixel k; this paper takes 3 × 3. From formula (6), it can be seen that the output image has an edge only where the guide image has a gradient, since $\nabla q = a_k \nabla I$ within the window [21, 22]. Define the output image q as the image p to be filtered minus the unwanted noise components n:

$$q_i = p_i - n_i. \qquad (7)$$

Minimizing the difference between q and p while satisfying the constraint of formula (6) yields the guided-filtering result. The most critical problem in guided filtering is to calculate the optimal solution for the coefficients $(a_k, b_k)$ [23]. In order to obtain the best optimization effect while maintaining the local linear model of formula (6), the difference between the image p to be filtered and the output image q is expressed by the minimized cost function [24]:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left( (a_k I_i + b_k - p_i)^2 + \varepsilon a_k^2 \right). \qquad (8)$$

The term $\varepsilon a_k^2$, with ε a regularization parameter, is introduced to prevent the value of $a_k$ from becoming too large [25]. According to linear regression analysis, the optimal solution is

$$a_k = \frac{\frac{1}{\lvert \omega \rvert} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon}, \qquad b_k = \bar{p}_k - a_k \mu_k, \qquad (9)$$

where $\mu_k$ and $\sigma_k^2$ are the mean and variance of I in $\omega_k$, $\lvert \omega \rvert$ is the number of pixels in the window, and $\bar{p}_k$ is the mean of p in $\omega_k$.

Since guided filtering uses an overlapping-window averaging strategy, it accounts well for the correlation of local information: a given pixel i of the output image is contained in many overlapping windows, its value depends on all the windows that cover it, and each window has a different center and hence different coefficients. Therefore, the mean of all the q values obtained must be computed [26]. For pixel i, the filtered output can be expressed as

$$q_i = \frac{1}{\lvert \omega \rvert} \sum_{k : i \in \omega_k} (a_k I_i + b_k) = \bar{a}_i I_i + \bar{b}_i. \qquad (10)$$
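The guided-filtering steps above (local linear model, ridge-regression coefficients, and overlapping-window averaging) translate almost directly into code. The sketch below is a straightforward, unoptimized NumPy implementation; the truncated box windows at the image border are a simplification:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window; edge windows are truncated."""
    h, w = x.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = x[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].mean()
    return out

def guided_filter(I, p, r=1, eps=1e-4):
    """Guided filter: q = mean_a * I + mean_b, where (a_k, b_k) solve the
    ridge regression of p on the guide I inside each window."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2        # per-window variance of the guide
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)                      # eps keeps a_k from growing too large
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)      # average over overlapping windows
```

With the image itself as the guide and a tiny ε, the filter reproduces the input and preserves its edges; a large ε smooths the edge instead.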

2.4. Image Segmentation Model

An image has many different characteristics, such as texture, spatial structure, and color. According to these characteristics, the image is divided into different regions, so that each region has its own characteristics and different regions differ from one another. Image segmentation allows further study of the individual regions [27]. Image segmentation is a very basic and important step in image processing: it is essential for subsequent recognition and analysis and serves as a bridge from image processing to image analysis. The result of segmentation directly affects subsequent machine processing, so image segmentation is one of the important research topics in the current image technology field. Segmentation algorithms are diverse and widely used in practical applications such as computer vision, pattern recognition, aesthetic image processing, and industrial fields [28].

Define an image I on the domain Ω. Based on the segmentation result of the U-net model with coarse calibration and on the level set method, this paper proposes the following image segmentation model, denoted Model I:

$$E(\phi) = -\lambda_1 \int_{\Omega} \log P_{\text{in}}(I(x)) \, H(\phi(x)) \, dx - \lambda_2 \int_{\Omega} \log P_{\text{out}}(I(x)) \, (1 - H(\phi(x))) \, dx + \nu \int_{\Omega} \delta(\phi(x)) \, \lvert \nabla \phi(x) \rvert \, dx,$$

where $P_{\text{in}}$ and $P_{\text{out}}$ are the probability density functions of pixel x inside and outside the object, which can be calculated according to the segmentation result of the U-net model with coarse calibration [29, 30]. I(x) is the image intensity value, and the smoothed intensity function is defined as

$$\tilde{I}(x) = (K_{\sigma} * I)(x) = \int_{\Omega} K_{\sigma}(x - y) \, I(y) \, dy,$$

where $K_{\sigma}$ is a Gaussian kernel with standard deviation σ and is defined as

$$K_{\sigma}(x) = \frac{1}{2\pi\sigma^2} \exp\!\left( -\frac{\lvert x \rvert^2}{2\sigma^2} \right).$$

H(u) and δ(u) are the regularized Heaviside and Delta functions; they are defined as follows:

$$H_{\epsilon}(u) = \frac{1}{2} \left( 1 + \frac{2}{\pi} \arctan \frac{u}{\epsilon} \right), \qquad \delta_{\epsilon}(u) = H_{\epsilon}'(u) = \frac{1}{\pi} \cdot \frac{\epsilon}{\epsilon^2 + u^2}.$$
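As a quick numerical sanity check of the regularized Heaviside and Delta functions (a NumPy sketch; the width ε = 1 is an assumed typical value, not the paper's setting), the Delta function should match the numerical derivative of the Heaviside function:

```python
import numpy as np

EPS = 1.0  # regularization width epsilon (assumed value for illustration)

def heaviside(u, eps=EPS):
    # Regularized Heaviside function H_eps(u)
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(u / eps))

def delta(u, eps=EPS):
    # Regularized Delta function: the analytic derivative of H_eps
    return (1.0 / np.pi) * eps / (eps ** 2 + u ** 2)
```

In a level set implementation, H selects the interior region and δ concentrates the curve-length penalty near the zero level set.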

The method part of this paper adopts the above method to study the key algorithm for the segmentation of copper engraving printing image based on deep learning. The specific process is shown in Table 1.


Table 1: Key related algorithms for segmentation of copperplate printed images based on deep learning.

2.1 Correlative principles of the color image synthesis process for copperplate printing
2.2 Watershed algorithm: (1) overview; (2) algorithm steps; (3) algorithm improvement
2.3 Guided filtering algorithm: (1) overview; (2) algorithm steps
2.4 Image segmentation model

3. Experiment on the Key Algorithm of Copper Engraving Printing Image Segmentation Based on Deep Learning

3.1. Optimization Algorithm Mechanism of Copper Engraving Printing Image Segmentation Based on Deep Learning
3.1.1. Attention Mechanism

The attention mechanism comes from the study of human vision: because information processing capacity is limited, people selectively attend to part of the available information while ignoring the rest. Researchers have introduced this idea into computer vision information processing, for example, in traditional local image feature extraction and image detection. In deep learning, the attention mechanism is usually realized as a separate neural network module that assigns different weights to different inputs or different parts of the input.

3.1.2. Attention Mechanism in Spatial Domain

The spatial transformer network (STN) model proposes a unit called the spatial transformer, which extracts the spatial transformation information of an image so that key information can be obtained. The spatial transformer is in fact a realization of the attention mechanism, because the trained transformer can find the image regions that deserve attention. It can also rotate and zoom, so that the important part of the image can be transformed and extracted, and the unit can be added directly to the original network structure as a new layer.

The spatial transformer unit can recover the transformation information implied by higher-level labels. The mapping it computes is differentiable, because the information at each target point is a combination of the information at the source points. In theory, such a unit can be added at any layer, because it can process channel information and spatial (matrix) information at the same time.

3.1.3. Channel Domain Attention Mechanism

The principle of the channel-domain attention mechanism can be understood from the perspective of signal decomposition. In the analysis of signals and systems, any signal can be written as a linear combination of sinusoidal waves; after a time-frequency transform, the continuous sinusoidal components represent the signal by its frequency-domain values instead of its time-domain values.
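A channel-domain attention mechanism can be illustrated with a squeeze-and-excitation-style block: global average pooling "squeezes" each channel to a scalar, and a small bottleneck network "excites" a per-channel weight in (0, 1). This NumPy sketch uses hypothetical layer sizes and random untrained weights; it is not the paper's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, b1, w2, b2):
    """Squeeze-and-excitation-style channel attention.
    x: feature map of shape (C, H, W).
    w1: (C, C//r) and w2: (C//r, C) -- bottleneck weights, reduction ratio r.
    Returns the input rescaled per channel by a weight in (0, 1)."""
    z = x.mean(axis=(1, 2))                                   # squeeze: global average pool -> (C,)
    s = sigmoid(np.maximum(z @ w1 + b1, 0.0) @ w2 + b2)       # excitation: FC-ReLU-FC-sigmoid
    return x * s[:, None, None]                               # reweight each channel

# Usage with random (untrained) weights, C = 8 channels, reduction r = 2:
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.normal(size=(C, H, W))
w1, b1 = rng.normal(size=(C, C // r)), np.zeros(C // r)
w2, b2 = rng.normal(size=(C // r, C)), np.zeros(C)
y = channel_attention(x, w1, b1, w2, b2)
```

Because the sigmoid keeps every channel weight in (0, 1), the block can only attenuate channels, letting the network emphasize informative channels relative to the rest.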

3.2. Image Segmentation of Copperplate Printing
3.2.1. Main Color Extraction

Color is a subjective sensation produced when external light stimulates the human visual organs, so the adaptive nature of the human visual system must be considered. Studies have shown that the appearance of colors depends on local contrast rather than absolute contrast values. Thus, the color characteristics of an image depend not only on the image itself but also on the observer's visual system and viewing experience. Therefore, designing a main color extraction method that combines the attributes of the image with human visual perception plays an important role in improving the segmentation of copperplate printing images.

The method of obtaining the main colors of the image through adaptive color image segmentation based on fuzzy set 2 is a histogram-based multithreshold method: fuzzy membership is used to capture the correlation (spatial information) between pixels, that is, the local information of the pixels, and the advantages of fuzzy theory are fully exploited, effectively reserving more low-level information for high-level processing. It is therefore a fuzzy threshold segmentation method that considers both the global and the local information of the image.

3.2.2. Adaptive Primary Color Extraction Based on Fuzzy Set 2

In image processing and recognition, human visual characteristics must be fully considered. The imaging process is a many-to-one mapping, so the image itself carries much uncertainty and imprecision, that is, fuzziness. The transition of an image from black to white is also fuzzy and hard for human visual perception to delimit. This uncertainty and imprecision are mainly reflected in the uncertainty of image gray levels, the uncertainty of geometric shapes, uncertain knowledge, and so on, which classical mathematical theory cannot handle. Fuzzy theory studies exactly this kind of uncertainty and imprecision and provides effective new techniques for intelligent information processing. Research has shown that fuzzy theory describes the uncertainty of images well, so it can be introduced as a model and method to describe image characteristics and human visual characteristics, analyze human judgment and perception, and support image recognition and related tasks.

Just as classical set theory describes nonfuzzy phenomena, fuzzy sets describe fuzzy phenomena. Corresponding to the universe of discourse in classical set theory, fuzzy theory uses the domain X to represent the value range of the research object. This paper proposes an adaptive multithreshold technique based on fuzzy set 2: by analyzing the histogram, its peaks are obtained automatically; assuming that each peak corresponds to one pixel class, the number of peaks equals the number of pixel classes N, which determines the number of thresholds as N − 1. Using this relationship between the number of classes and the number of thresholds, the shape of the fuzzy membership curve is controlled by adjusting the membership window width, achieving a fully adaptive multithreshold technique.
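The peak-to-threshold step described above can be sketched as follows (a simplified crisp-histogram illustration in NumPy; the fuzzy membership windows of the actual fuzzy set 2 method are omitted):

```python
import numpy as np

def histogram_thresholds(hist):
    """Find histogram peaks, treat each peak as one pixel class (N classes),
    and place the N - 1 thresholds at the valley between consecutive peaks."""
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
    thresholds = []
    for a, b in zip(peaks, peaks[1:]):
        valley = a + 1 + int(np.argmin(hist[a + 1:b]))   # lowest bin between peaks
        thresholds.append(valley)
    return peaks, thresholds
```

For a bimodal histogram this yields N = 2 classes and N − 1 = 1 threshold placed at the valley between the two peaks.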

3.2.3. Main Color Extraction Based on Fuzzy Set 2

Most earlier fuzzy threshold methods did not consider the correlation between image pixels, so the local information of the pixels was lost during processing. Therefore, this paper accounts for the correlation between pixels by defining a fuzzy membership relation. Combined with the multithreshold technique above, an adaptive color image segmentation technique based on fuzzy set 2 is used to extract the main colors: first, obtain the fuzzy membership histogram of each color component of the image; then apply the fuzzy-set-2-based multithreshold technique to obtain the threshold set of each component; next, cluster the image according to the threshold sets; and finally perform the necessary region merging to prevent oversegmentation.

The above steps are used in the experimental research on the key algorithms for segmentation of copperplate print images based on deep learning. The specific process is shown in Table 2.


Table 2: Research experiment on the key algorithm of copper engraving printing image segmentation based on deep learning.

3.1 Optimization algorithm mechanism of copper engraving printing image segmentation based on deep learning: (1) attention mechanism; (2) spatial-domain attention mechanism; (3) channel-domain attention mechanism
3.2 Image segmentation of copperplate printing: (1) primary color extraction; (2) adaptive dominant color extraction based on fuzzy set 2; (3) main color extraction based on fuzzy set 2

4. Results of Key Algorithms for Image Segmentation of Copperplate Printing Based on Deep Learning

4.1. Antijamming Performance Analysis of the Algorithm for Gaussian White Noise

Gaussian white noise is a common image noise that obeys a Gaussian distribution; it appears at almost every pixel, but its amplitude is random. In this experiment, we first segment the image, then add Gaussian white noise of different intensities to the image, and then use the algorithm in this paper to obtain the segmentation results. The noise intensity ranges from 0.01 to 0.10 in steps of 0.01. In order to further illustrate the algorithm's robustness to Gaussian white noise, the global consistency error between the segmentation result of the original image and that of the noise-added image is calculated, as shown in Table 3 and Figure 1.


Table 3: Global consistency error under different noise intensities.

Noise intensity    Global consistency error
0.01               0.0127
0.02               0.0089
0.03               0.0135
0.04               0.0106
0.05               0.0174
0.06               0.0156
0.07               0.0145
0.08               0.0221
0.09               0.0136
0.10               0.0157

It can be seen from Table 3 and Figure 1 that, as the intensity of the Gaussian white noise increases, the largest global consistency error between the original segmentation result and the noisy segmentation result is 0.0221, which is almost negligible, indicating that the algorithm has strong robustness to Gaussian white noise.
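The global consistency error reported above can be computed as follows (a sketch following the standard Martin et al. definition of GCE; the paper does not spell out the exact variant it uses):

```python
import numpy as np

def global_consistency_error(s1, s2):
    """Global consistency error between two label images of equal shape.
    For each pixel, the local refinement error compares the region containing
    the pixel in one segmentation against the other; GCE takes the smaller of
    the two one-directional averages, so GCE = 0 iff one segmentation is a
    refinement of the other."""
    s1, s2 = s1.ravel(), s2.ravel()
    n = s1.size

    def one_way(a, b):
        err = 0.0
        for la in np.unique(a):
            in_a = (a == la)                      # region R(a, p) for these pixels
            for lb in np.unique(b[in_a]):
                both = in_a & (b == lb)
                # per-pixel error |R(a,p) \ R(b,p)| / |R(a,p)|, summed over `both`
                err += both.sum() * (in_a.sum() - both.sum()) / in_a.sum()
        return err

    return min(one_way(s1, s2), one_way(s2, s1)) / n
```

GCE is zero when one segmentation is a refinement of the other, so the small values in Table 3 indicate that the noisy segmentations stay consistent with the original one.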

4.2. Algorithm Target Number Analysis

For the definition of target size in this article, the ISAID dataset is analyzed and sorted in ascending order by the number of pixels each target occupies. Among the 655451 targets contained in the dataset, target sizes range from tens of thousands of pixels to tens of millions of pixels. Taking one million pixels (1000 pixels × 1000 pixels) as the unit of target area, the number of targets is counted as shown in Table 4 and Figure 2.


Table 4: Number of targets by occupied pixel area.

Pixels          Number of targets
1000 × 1000     6827
2000 × 2000     4246
3000 × 3000     7531
4000 × 4000     6201
5000 × 5000     5946
6000 × 6000     7962
7000 × 7000     8145
8000 × 8000     11942

In order to make the classification of target sizes reasonable, with the number of targets evenly distributed across categories, this article classifies targets occupying fewer than 3000 pixels × 3000 pixels as small targets; targets occupying more than 3000 pixels × 3000 pixels but fewer than 7000 pixels × 7000 pixels as medium targets; and the rest as large targets.


Table 5: Number of iterations needed for PA and IoU to reach their maxima under different values of the parameter p.

p                                       1.0    1.1    1.2    1.3    1.4    1.5
Iterations when PA reaches maximum      2985   1846   1027   1439   1753   2461
Iterations when IoU reaches maximum     3527   2714   1429   2374   3107   3345


Table 6: Comparative test results of each algorithm on different categories.

Dataset       Watershed algorithm    Guided filtering algorithm    Image segmentation model
Building      0.761                  0.651                         0.762
Car           0.654                  0.678                         0.796
Tree          0.811                  0.704                         0.814
Pedestrian    0.646                  0.749                         0.731
Painting      0.793                  0.812                         0.837


Table 7: Processing time (seconds) of each algorithm.

Processing time    Watershed algorithm    Guided filtering algorithm    Image segmentation model
Sample 1           0.41                   0.42                          0.37
Sample 2           0.42                   0.43                          0.34
Sample 3           0.51                   0.48                          0.41
Sample 4           0.44                   0.52                          0.39
Sample 5           0.49                   0.47                          0.42
Average            0.45                   0.46                          0.39

4.3. Algorithm Processing Analysis
(1) A large number of numerical experiments were carried out in this paper, and it was found that the choice of the parameter p in Model I has a great impact on the experimental results: different values change the number of iterations needed for the model's evaluation indicators PA and IoU to reach their maxima. Therefore, this paper fixes the other parameters and repeats the experiment with different values of p, recording and comparing the results to select an appropriate value. The test results are shown in Table 5 and Figure 3. According to these results, we choose p = 1.2, which reduces the number of iterations and achieves better model performance. The corresponding segmentation result of the copper engraving printing image is shown in Figure 4 (picture from Baidu Encyclopedia).

(2) In the training process, the weights of the network model pretrained on the ImageNet dataset are used to initialize the deep learning part of the model. This article uses the Adam optimizer and the cross-entropy loss function for training, and the comparative test results are shown in Table 6 and Figure 5. As the table and chart show, the image segmentation neural network proposed in this paper achieves a relatively good overall segmentation effect on the ImageNet test data, and the intersection over union of the two categories is relatively high, which demonstrates the feasibility of the algorithm on this dataset.

(3) Several algorithms are evaluated quantitatively, and the processing time of each algorithm for copper engraving printing image segmentation is measured and analyzed. The processing times are shown in Table 7 and Figure 6.

The experimental results show that, in terms of segmentation speed, the watershed algorithm, the guided filtering algorithm, and the image segmentation model differ little in processing time across samples, indicating that all of them run stably. The average processing time of the image segmentation model is 0.39 seconds per image, which is relatively fast.

5. Conclusions

Image segmentation methods based on neural network technology have attracted great attention from researchers in the vision field. Improved algorithms, new methods, and new tools keep emerging, and deep learning in particular has become an emerging topic in image segmentation. This article discussed the current mainstream image segmentation methods and, in view of the research status of deep learning technology, analyzed and summarized the related algorithms for copperplate print image segmentation. The work is mainly inspired by traditional methods combined with deep learning methods, with modest improvements. Although the improvements are significant, they were attempted on a relatively simple network with relatively few convolutional layers; future research needs to go deeper and verify the effect on more complex networks with more convolutional layers.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.

References

1. Y. Chen, Z. Lin, X. Zhao et al., “Deep learning-based classification of hyperspectral data,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 6, pp. 2094–2107, 2017.
2. S. Postalcolu, “Performance analysis of different optimizers for deep learning-based image recognition,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 34, no. 2, pp. 1–6, 2020.
3. F. F. Guo, X. X. Wang, and J. Shen, “Adaptive fuzzy c-means algorithm based on local noise detecting for image segmentation,” IET Image Processing, vol. 10, no. 4, pp. 272–279, 2016.
4. X. Hao, G. Zhang, and S. Ma, “Deep learning,” International Journal of Semantic Computing, vol. 10, no. 3, pp. 417–439, 2016.
5. G. Litjens, T. Kooi, B. E. Bejnordi et al., “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, no. 9, pp. 60–88, 2017.
6. S. Levine, P. Pastor, A. Krizhevsky et al., “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” International Journal of Robotics Research, vol. 37, no. 4-5, pp. 421–436, 2016.
7. Y. J. Cha, W. Choi, G. Suh et al., “Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types,” Computer-Aided Civil and Infrastructure Engineering, vol. 33, no. 4, pp. 1–17, 2018.
8. X. Zhang and D. Wang, “Deep learning based binaural speech separation in reverberant environments,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 5, pp. 1075–1084, 2017.
9. W. Li, H. Fu, L. Yu et al., “Stacked autoencoder-based deep learning for remote-sensing image classification: a case study of African land-cover mapping,” International Journal of Remote Sensing, vol. 37, no. 23-24, pp. 5632–5646, 2016.
10. Q.-S. Zhang and S.-C. Zhu, “Visual interpretability for deep learning: a survey,” Frontiers of Information Technology & Electronic Engineering, vol. 19, no. 1, pp. 27–39, 2018.
11. A. Impallaria, F. Tisato, F. Petrucci, M. Dal Colle, and E. Ruggio, “La paleta de un artista veneciano del siglo XVI: materiales y métodos de Giovanni da Mel” [“The palette of a sixteenth-century Venetian artist: materials and methods of Giovanni da Mel”], Ge-Conservacion, vol. 11, no. 11, pp. 230–236, 2017.
12. L. Zhuo, Z. Geng, J. Zhang, and X. G. Li, “ORB feature based web pornographic image recognition,” Neurocomputing, vol. 173, no. 3, pp. 511–517, 2016.
13. B. Zhou and Y. Cheng, “Fault diagnosis for rolling bearing under variable conditions based on image recognition,” Shock and Vibration, vol. 2016, no. 1, pp. 1–14, 2016.
14. J. Fang, K. Wang, and Y. Huang, “Weld pool image recognition of humping formation process in high speed GMAW,” Hanjie Xuebao/Transactions of the China Welding Institution, vol. 40, no. 2, pp. 42–46, 2019.
15. S. Oh, C.-H. Kim, S. Lee, B.-G. Park, and J.-H. Lee, “Grayscale image recognition using spike-rate-based online learning and threshold adjustment of neurons in a thin-film transistor-type NOR flash memory array,” Journal of Nanoscience and Nanotechnology, vol. 19, no. 10, pp. 6055–6060, 2019.
  15. O. SeongbinC.-H. Kim, S. Lee, B.-G. Park, and J.-H. Lee, “Grayscale image recognition using spike-rate-based online learning and threshold adjustment of neurons in a thin-film transistor-type NOR flash memory array,” Journal of Nanoscience and Nanotechnology, vol. 19, no. 10, pp. 6055–6060, 2019. View at: Google Scholar
  16. S. Long and X. Zhao, “Smart teaching mode based on particle swarm image recognition and human-computer interaction deep learning,” Journal of Intelligent & Fuzzy Systems, vol. 39, no. 4, pp. 5699–5711, 2020. View at: Publisher Site | Google Scholar
  17. C.-M. Noh, K.-K. Kim, S.-B. Lee, D.-H. Kang, and J.-C. Lee, “A study on safety helmet detection using image recognition algorithm,” Korean Journal of Computational Design and Engineering, vol. 25, no. 4, pp. 350–357, 2020. View at: Publisher Site | Google Scholar
  18. J. J. Winston, G. F. Turker, U. Kose, and D. J. Hemanth, “Novel optimization based hybrid self-organizing map classifiers for Iris image recognition,” International Journal of Computational Intelligence Systems, vol. 13, no. 1, pp. 1048–1058, 2020. View at: Publisher Site | Google Scholar
  19. D. J. Hemanth, J. Anitha, and V. E. Balas, “Fast and accurate fuzzy C-means algorithm for MR brain image segmentation,” International Journal of Imaging Systems and Technology, vol. 26, no. 3, pp. 188–195, 2016. View at: Publisher Site | Google Scholar
  20. A. Parsi, A. Ghanbari Sorkhi, and M. Zahedi, “Improving the unsupervised LBG clustering algorithm performance in image segmentation using principal component analysis,” Signal, Image and Video Processing, vol. 10, no. 2, pp. 301–309, 2016. View at: Publisher Site | Google Scholar
  21. Z. Sipeng, J. Wei, and S. Shin’Ichi, “Multilevel thresholding color image segmentation using a modified artificial bee colony algorithm,” IEICE Transactions on Information and Systems, vol. 101, no. 8, pp. 2064–2071, 2018. View at: Google Scholar
  22. X. Zhou, R. Zhao, F. Yu, and H. Tian, “Intuitionistic fuzzy entropy clustering algorithm for infrared image segmentation,” Journal of Intelligent & Fuzzy Systems, vol. 30, no. 3, pp. 1831–1840, 2016. View at: Publisher Site | Google Scholar
  23. Y. Liu, J. Yang, B. Guo, J. Yang, and X. Zhang, “A novel image segmentation combined color recognition algorithm through boundary detection and deep neural network,” International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 2, pp. 331–342, 2016. View at: Publisher Site | Google Scholar
  24. O. A. Samorodova and A. V. Samorodov, “Fast implementation of the Niblack binarization algorithm for microscope image segmentation,” Pattern Recognition and Image Analysis, vol. 26, no. 3, pp. 548–551, 2016. View at: Publisher Site | Google Scholar
  25. Y. Chao, M. Dai, K. Chen, P. Chen, and Z. Zhang, “A novel gravitational search algorithm for multilevel image segmentation and its application on semiconductor packages vision inspection,” Optik, vol. 127, no. 14, pp. 5770–5782, 2016. View at: Publisher Site | Google Scholar
  26. M. S. R. Naidu and K. P. Rajesh, “Multilevel image thresholding for image segmentation by optimizing fuzzy entropy using firefly algorithm,” International Journal of Engineering and Technology, vol. 9, no. 2, pp. 472–488, 2017. View at: Google Scholar
  27. J. Zhao, X. Wang, H. Zhang, J. Hu, and X. Jian, “Side scan sonar image segmentation based on neutrosophic set and quantum-behaved particle swarm optimization algorithm,” Marine Geophysical Research, vol. 37, no. 3, pp. 229–241, 2016. View at: Publisher Site | Google Scholar
  28. K. Ghathwan and A. J. Mohammed, “Intelligent bio-inspired whale optimization algorithm for color image based segmentation,” Pertanika Journal of Science and Technology, vol. 28, no. 4, pp. 1389–1411, 2020. View at: Google Scholar
  29. O. J. Al-Furaiji, N. Anh Tuan, and V. Y. Tsviatkou, “A new fast efficient non-maximum suppression algorithm based on image segmentation,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 19, no. 2, pp. 1062–1070, 2020. View at: Publisher Site | Google Scholar
  30. R. Kumari, N. Gupta, and N. Kumar, “Cumulative histogram based dynamic particle swarm optimization algorithm for image segmentation,” Indian Journal of Computer Science and Engineering, vol. 11, no. 5, pp. 557–567, 2020. View at: Publisher Site | Google Scholar

Copyright © 2021 Ye Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
