Scientific Programming for Industry 5.0: Theory, Applications, and Technological Development
Identification of Crop Diseases and Insect Pests Based on Deep Learning
To address the large variety of crop diseases and insect pests, their rapid spread, and the long time required for manual identification, a crop disease and pest identification model based on deep learning is proposed from the perspective of ecological and environmental protection. First, crop images are collected by field sampling to build a data set, and image preprocessing is completed using nearest-neighbor interpolation. Then, the network structure of the AlexNet model is improved: the fully connected layers are optimized, and different numbers of neuron nodes and experimental parameters are set. Finally, the improved AlexNet model is used to identify crop diseases and insect pests. Experiments on the constructed data set show that the average recognition accuracy and recognition time for fragrant pear diseases and insect pests are 96.26% and 321 ms, respectively, both better than the comparison models. Moreover, the recognition accuracy of the method on other data sets is no less than 91%, indicating good portability.
1. Introduction
With the continuous advancement of agricultural reform, modern technology has become closely tied to the development of agriculture. The sustainable development of modern agriculture is no longer limited to the use of natural resources; it also includes the understanding and control of information resources. In recent years, the continuous deterioration of the ecological environment has made the ecological structure more fragile, and crop diseases and insect pests often break out on a large scale [2, 3]. Frequent outbreaks of crop diseases and insect pests directly affect the quantity and quality of agricultural products, resulting in economic losses. Therefore, it is necessary to study the control of crop diseases and insect pests to avoid unnecessary losses.
The first task in crop disease and insect pest control is to identify diseases and pests quickly and accurately, so that risk assessment and control treatment can follow. Statistical analysis and prediction of the same type of diseases and insect pests from a large amount of case data are also very important. In the past, pest control was usually carried out through manual statistics and analysis: technical personnel or agricultural experts relied on experience to determine the type of pest through tedious and repetitive inspection, measurement, and statistical calculation [7, 8]. However, because of differences in individual experience and technique, such identification is not accurate, and the manual way of processing data introduces deviations and omissions, so modern information technology is urgently needed to support this work.
After years of exploration, some achievements have been made in the identification of crop diseases and insect pests. One study deeply analyzed the identification methods for different crop diseases and explored their control technologies, providing technical guidance for effectively solving disease and pest problems during strawberry growth. Another analyzed new diseases and insect pests in Aquilaria sinensis plantations; by comparing the pest and disease types in the plantation, comprehensive management of these threats was carried out, effectively curbing the evolution of the epidemic situation. Machine learning and deep learning algorithms have been applied to disease and pest identification in recent years, including fuzzy recognition technology, support vector machines, and the traditional backpropagation (BP) neural network. A local fuzzy image processing method based on deep learning has been proposed to identify crop diseases and insect pests, and experimental results show that it is effective; however, its data preprocessing process is complex [14, 15]. A detection method for rice diseases and insect pests based on deep learning improves detection efficiency by reducing the model size, but it targets only a single crop, disease, or insect pest and performs poorly when recognizing a variety of diseases and pests. Another study investigated crop diseases and insect pests in southern India and determined the pest and disease types of Haryana cypress in the southwest of the country, which is helpful for the steady progress of follow-up prevention and control work.
One model uses Inception-ResNet-v2 to complete a combined convolution operation and accurately identify crop diseases and insect pests, but the parameter settings of the original model are complicated, which hinders practical application. A video detection architecture based on deep learning achieves precise detection of plant diseases and insect pests, but its efficiency in identifying a variety of diseases and pests needs improvement. An image recognition method for bacteria and fungi detection based on image processing technology effectively improves detection accuracy and efficiency, but it depends heavily on the original data.
To solve the aforementioned problems of complex data preprocessing and low recognition accuracy, a recognition model for crop diseases and insect pests based on deep learning is proposed from the perspective of ecological environment protection. Compared with traditional models, the innovations of the proposed model are summarized as follows:
(1) The size, number, and stride of the convolution kernels of AlexNet, together with its large fully connected layers, can lead to overfitting of the network. The proposed model therefore retains the first five convolutional layers of AlexNet and removes all of the original fully connected layers to improve the accuracy and efficiency of the model.
(2) To address the low accuracy of crop pest identification, the proposed model uses the improved AlexNet to extract features, further improving recognition performance.
2. System Flow
The process of identifying crop diseases and insect pests based on convolutional neural networks mainly includes the collection and preprocessing of image data sets, the construction and training of convolutional neural networks, and the verification of the accuracy of the neural network. The system flow is shown in Figure 1.
The description of each part of the system is as follows:
(1) Data set collection and preprocessing: preprocessing mainly includes four steps: data set optimization, image transformation, image standardization, and data enhancement. Optimizing the data set mainly means selecting from the image data set, thereby reducing the time cost of the experimental process. Image transformation adjusts the size of each picture by interpolation, ensuring that the intuitive morphological features, edges, and disease textures of the leaves are well preserved. Image standardization obtains the deviation maps of the red, green, and blue channels for each sample of the entire test set and produces standardized pictures to improve classification accuracy. Data enhancement randomly sorts the read images to ensure that the distributions of some statistical features of the train set and the test set are similar, thereby improving the classification accuracy of the network.
(2) Constructing and training a convolutional neural network: this mainly includes four parts: defining the structure of the convolutional neural network, defining the loss function, iterative training, and accuracy assessment. Defining the network structure means defining the forward-calculation formulas of the neural network; the choice of loss function affects the training speed of the network; iterative training uses the backpropagation principle to continuously calculate and update the network weights until training ends; accuracy assessment evaluates and improves the identification accuracy of the network.
Four convolutional neural network structures are mainly constructed and trained here: AlexNet, GoogLeNet, and improved convolutional networks based on transfer learning and on data expansion.
(3) Verifying the model accuracy of the neural network: a number of comparative experiments are designed to verify the identification accuracy of the various convolutional networks. First, AlexNet and GoogLeNet are verified: the data set is divided into a train set and a test set, the proportion of the train set and the types of diseases and insect pests are varied, the accuracy rates of AlexNet and GoogLeNet are observed, and the problems are analyzed. Second, the improved convolutional networks based on transfer learning are verified: the data set is divided into train, validation, and test sets, the features extracted from the first six and seven layers of AlexNet are used to train three different network structures, and their identification accuracies are compared and analyzed. Finally, the best additive Gaussian noise value and the effect of the improved AlexNet based on data expansion are verified by experiments.
3. Data Set Construction and Preprocessing
3.1. Data Set Construction
(1) The data used are from the Yulu fragrant pear experimental field of Shanxi Agricultural University, collected mainly from June to August 2020. To limit interference from the outdoor environment, the pest-damaged leaves picked outdoors were placed in an indoor environment (temperature 25°C, natural light plus fluorescent light) for image shooting and sorting. The shooting equipment was a Sony DSC-WX30 digital camera with intelligent autofocus, and the image resolution is 4608 × 3456 pixels. Photos were taken at a distance of 25 to 35 cm from the leaf, either perpendicular to the leaf (90°) or at a tilt angle of 20° to 50°. Insect pests were collected from common pear tree pests. A total of 1,200 images of Yulu fragrant pear leaf pests were selected, including 450 images of scarab pests, 400 images of pear gall insect pests, and 350 images of pear psylla pests. 900 images were randomly selected for model training, and the remaining 300 were used for model validation and testing. Example images of pests and diseases are shown in Figure 2.
(2) To accelerate the calculation efficiency and speed of the model, oversized areas are cropped and compressed and undersized parts are padded before training and testing. The processed image resolution is 224 × 224 pixels.
3.2. Data Set Preprocessing
The types of crop diseases and insect pests are various, and crop leaves differ in size. For some plants there are only dozens of pictures of a given disease or pest and no healthy samples as a control group, so such subsets have no training value. It is therefore necessary to optimize the data set and make it suitable for reading into the train and test sets:
(1) Optimize the data set: select 42 subsets from the constructed image data set as experimental sub-data sets. The feasibility of the network model and the adjustment of network parameters are verified on data sets with fewer categories and samples, reducing the time cost of the experimental process.
(2) Image transformation: adjust the size of each picture by interpolation, unifying them to 224 × 224. Several interpolation methods were tested; by comparison, nearest-neighbor interpolation gives the best scaling effect on this data set, retaining the intuitive morphological characteristics of the leaves, with edges and disease textures well preserved. Figure 3(a) is obtained by bilinear interpolation and Figure 3(b) by nearest-neighbor interpolation.
(3) Image standardization (optional; it has no obvious impact on identification accuracy with AlexNet): for each sample of the whole test set, the deviation maps of the red, green, and blue channels are calculated, and standardized pictures are obtained. When the training sample is large enough, by statistical laws the mean values of the train set and the test set converge to the same value.
(4) Data enhancement: to improve network accuracy, the read images are first randomly sorted to ensure that the distributions of some statistical features of the train set and the test set are similar; the batch size is then set to 12.
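As a concrete illustration of step (2), nearest-neighbor scaling simply copies the closest source pixel for each output pixel, which is why hard leaf edges and lesion textures stay crisp. The sketch below is not the authors' implementation; it uses plain nested lists in place of a real image library:

```python
def resize_nearest(img, out_h=224, out_w=224):
    """Nearest-neighbor resize for an image stored as a list of rows.
    Each output pixel copies the nearest source pixel, so edges and
    disease textures are not smoothed away (unlike bilinear averaging)."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[(y * in_h) // out_h][(x * in_w) // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# Upscaling a 2x2 image to 4x4 duplicates each pixel into a 2x2 block.
small = [[1, 2],
         [3, 4]]
big = resize_nearest(small, out_h=4, out_w=4)
```

In a real pipeline the same effect is obtained by requesting the nearest-neighbor mode of whatever imaging library is in use.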
After completing the above steps, the data sets used for convolutional neural network training are obtained. The entire data set can also be randomly divided into two parts, one for training and the other for testing, with a training-rate parameter set to adjust the ratio between the two parts.
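The random split controlled by a training-rate parameter can be sketched as follows; the function name and fixed seed are illustrative choices, not from the paper:

```python
import random

def split_dataset(samples, train_rate=0.75, seed=42):
    """Randomly split a list of samples into train and test parts.
    `train_rate` is the parameter that adjusts the ratio between the
    two subsets; a fixed seed keeps the split reproducible."""
    shuffled = samples[:]                 # copy so the original order is kept
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_rate)
    return shuffled[:cut], shuffled[cut:]
```

With 1,200 images and a training rate of 0.75, this yields the 900/300 split used in Section 3.1.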
4. Identification of Crop Diseases and Insect Pests Based on the Improved AlexNet Model
4.1. AlexNet Model
As a classic convolutional neural network model, AlexNet introduces the rectified linear unit as an activation function and uses dropout to randomly ignore some neurons to prevent overfitting. The model also uses two GPUs to accelerate the training of the network. Compared with earlier models, AlexNet therefore performs well in tasks such as image classification and target detection [24, 25]. AlexNet is an 8-layer deep network with about 60 million parameters, comprising 5 convolutional layers and 3 fully connected layers. The structure of the model is shown in Figure 4.
The first layer of the AlexNet model is a convolutional layer. The input image is 224 × 224 × 3, the number of convolution kernels is 96, the kernel size is 11 × 11 × 3, and the stride is 4. The second convolutional layer takes the output of the first as input and filters it with 256 kernels of size 5 × 5 × 48, with padding 2 and stride 1. The third, fourth, and fifth convolutional layers are connected directly to each other: the third has 384 kernels of size 3 × 3 × 256, the fourth has 384 kernels of size 3 × 3 × 192, and the fifth has 256 kernels of size 3 × 3 × 192. The sixth, seventh, and eighth layers are fully connected. In the configuration used here, each of the original fully connected layers has 2048 neurons, and the last fully connected layer is a classifier with 500 outputs.
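The feature-map sizes implied by these layer settings follow from the standard convolution output-size formula. The helper below is an illustrative sketch; the values in the usage comments assume the commonly cited 227-pixel effective input for AlexNet's first layer, which is an assumption rather than a figure from this paper:

```python
def conv_out(in_size, kernel, stride, pad=0):
    """Spatial output size of a convolution:
    floor((in_size - kernel + 2*pad) / stride) + 1."""
    return (in_size - kernel + 2 * pad) // stride + 1

# First layer: 11x11 kernel, stride 4, no padding, 227-pixel input -> 55
first = conv_out(227, 11, 4)
# Second layer: 5x5 kernel, stride 1, padding 2 preserves a 27-pixel map
second = conv_out(27, 5, 1, pad=2)
```

The same formula explains why padding 2 with a 5 × 5 kernel and stride 1 leaves the spatial size unchanged.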
By analyzing the structure of the AlexNet model, it can be seen that how to reduce the occurrence of redundancy in the fully connected layer is the key to the experimental optimization of the model structure. Therefore, the experiment will improve the fully connected layer part of the model.
4.2. Improved AlexNet Model
The size, number, and stride of the convolution kernels of AlexNet have significant impacts on its identification accuracy. To avoid overfitting and slow training while detecting only the characteristics of fragrant pear leaves, the structure of AlexNet is improved: the first 5 convolutional layers are kept and all fully connected layers are removed [26, 27]. Then, the fully connected layer parameters of L6 and L7 are trained for several rounds in turn, and the network recognition accuracy (AP) of L6 and L7 under different parameter settings is compared. The identification accuracy under different parameter settings is shown in Table 1.
It can be seen from Table 1 that, when the numbers of nodes in L6 and L7 are 512 and 256, respectively, the improved AlexNet reaches its highest identification accuracy for fragrant pear pests, 96.93%.
The function of the convolutional layers is to extract image features; by retaining these layers and their pretrained weights, the network converges faster and extracts target features more easily. The proposed model therefore does not change the convolutional layer parameters of AlexNet: the first-layer kernel size remains 11 × 11 × 3 and the second-layer kernel size remains 5 × 5 × 48, consistent with the convolutional settings of the original network. The improved AlexNet used for pest identification thus comprises 5 convolutional layers, 2 fully connected layers, and 1 output layer. Since fragrant pear pest identification is a two-class problem, the last fully connected layer is set to 2 outputs. The specific network configuration after modification and adjustment is shown in Table 2.
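A toy forward pass through the modified head (512 → 256 → 2-way softmax) can be sketched in NumPy. The weights here are random placeholders, so this only illustrates the layer shapes selected in Table 1, not the trained model:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def improved_head(features, seed=0):
    """Forward pass through the modified fully connected head:
    flattened conv features -> 512 nodes -> 256 nodes -> 2-way softmax.
    Weights are random placeholders; in the real model they are learned."""
    rng = np.random.default_rng(seed)
    d = features.shape[-1]
    w1 = rng.standard_normal((d, 512)) * 0.01
    w2 = rng.standard_normal((512, 256)) * 0.01
    w3 = rng.standard_normal((256, 2)) * 0.01
    h1 = np.maximum(features @ w1, 0.0)   # ReLU
    h2 = np.maximum(h1 @ w2, 0.0)         # ReLU
    return softmax(h2 @ w3)               # two-class probabilities
```

Each row of the output is a probability distribution over the two classes, matching the two-class formulation of the pear pest task.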
4.3. Experimental Parameter Description and Model Training
In the experiment, the data set is divided into three parts: train set, validation set, and test set, with sizes 900, 121, and 121, respectively. To avoid memory overflow, batch training is adopted for the comparison experiments on the train and validation sets of the AlexNet model, and the batch size of the validation set is synchronized with that of the train set. Traversing all train-set data once is called one iteration; that is, one iteration covers all train-set data, not just one batch. The number of iterations is set to 150, and the evaluation index of the model is calculated on the test set after each iteration.
The experiment uses the Keras loss function categorical cross-entropy as the cost function, defined as

L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M} y_{ij}\,\log \hat{y}_{ij},

where N is the number of samples, M is the number of categories, and \hat{y}_{ij} and y_{ij} are the predicted output and the corresponding true label, respectively; this is a multioutput loss function.
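For reference, this loss can be implemented directly in NumPy as a short sketch; the clipping of predictions away from zero is an added numerical safeguard, not something stated in the paper:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy over N samples and M classes:
    L = -(1/N) * sum_i sum_j y_ij * log(y_hat_ij).
    y_true holds one-hot labels; y_pred holds predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)   # avoid log(0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))

# A maximally uncertain two-class prediction costs -log(0.5) ~= 0.693 per sample.
loss = categorical_crossentropy(
    np.array([[1, 0], [0, 1]]),
    np.array([[0.5, 0.5], [0.5, 0.5]]),
)
```

Keras applies the same formula internally when `loss="categorical_crossentropy"` is selected.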
To solve the problem of gradient disappearance and explosion during backpropagation, batch normalization is introduced to standardize the network layer inputs. The dropout mechanism excludes neuron nodes from the backpropagation process with a probability of 0.3. To improve the efficiency of parameter adjustment, the adaptive moment estimation (Adam) algorithm is used to optimize the model, with an initial learning rate of 0.00001. To save the optimal model parameters, a model checkpoint mechanism is introduced: after each iteration, the current model is saved if the training-set accuracy has improved.
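The training setup described above might be expressed in Keras roughly as follows. This is a configuration sketch under stated assumptions, not the authors' code: the tiny stand-in model and the checkpoint file name are placeholders, and the actual improved AlexNet layers would replace them.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder stand-in model illustrating the configuration only;
# the real network is the improved AlexNet described in Section 4.2.
model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 3)),
    layers.BatchNormalization(),            # standardize layer inputs
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.3),                    # drop neurons with p = 0.3
    layers.Dense(256, activation="relu"),
    layers.Dense(2, activation="softmax"),  # two-class output
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-5),  # initial lr 0.00001
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# Save the current model whenever training accuracy improves.
checkpoint = keras.callbacks.ModelCheckpoint(
    "best_model.h5", monitor="accuracy", save_best_only=True
)
# model.fit(train_x, train_y, batch_size=12, epochs=150,
#           validation_data=(val_x, val_y), callbacks=[checkpoint])
```

The `fit` call is commented out because the image tensors are not defined in this fragment.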
5. Experimental Results and Analysis
5.1. Evaluation Index
AP is used to evaluate the identification performance of the improved AlexNet network. AP is the integral of the precision-recall (PR) curve, built on top of the precision. The evaluation indexes are calculated as

P = \frac{TP}{TP + FP}, \quad R = \frac{TP}{TP + FN}, \quad AP = \int_{0}^{1} P(R)\,dR,

where P is the precision, R is the recall, TP is the number of fragrant pear disease and pest leaves correctly identified by the algorithm, FP is the number misrecognized, and FN is the number not recognized.
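These quantities can be computed directly from the TP/FP/FN counts. The AP helper below approximates the PR-curve integral with the trapezoidal rule, which is one common convention; the paper does not specify its integration scheme:

```python
def precision_recall(tp, fp, fn):
    """Precision P = TP/(TP+FP) and recall R = TP/(TP+FN), from counts of
    correctly identified (TP), misrecognized (FP), and missed (FN) leaves."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

def average_precision(points):
    """Approximate AP as the area under the PR curve, given
    (recall, precision) points sorted by recall (trapezoidal rule)."""
    ap = 0.0
    for (r0, p0), (r1, p1) in zip(points, points[1:]):
        ap += (r1 - r0) * (p0 + p1) / 2.0
    return ap
```

For example, 90 correct detections with 10 misrecognitions and 5 misses give a precision of 0.9 and a recall of about 0.947.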
5.2. Model Stability
To demonstrate the stability of the proposed model, its accuracy over 60 iterations is compared with that of the models in [13, 19, 20]. The changes are shown in Figure 5.
It can be seen from Figure 5 that, compared with the other models, the proposed model shows the smallest overall variation in recognition accuracy as the number of input images gradually decreases, and its overall recognition accuracy is the highest of the four models. When the number of input images is 128, the curves of all four models flatten after the 40th iteration, indicating that the models have entered a stable stage of leaf pest feature extraction. When the number of input images is 64 or 32, the proposed model is basically stable after the 45th iteration, while the other three models all show a small increase, with the gradually increasing trend of one comparison model being especially obvious.
Similarly, the change of the loss function on each model's validation set over the 60 iterations is shown in Figure 6.
It can be seen from Figure 6 that the loss of each model decreases as the number of input images decreases. Comparing across input sizes, the proposed model has a smaller range of variation, the lowest loss, and higher stability than the other three models. From the beginning of training to the 15th iteration, the loss values of the models in [13, 19] change greatly, indicating a high risk of identification error in the early stage, which makes their identification unstable.
From the changes of accuracy and loss during training, the following can be concluded: changing the number of input images directly affects the model's identification accuracy, identification risk, and the stability of its identification performance. The proposed model nevertheless performs most stably among the compared models, and its identification accuracy and loss show the smallest ranges of variation, which are optimal for any number of images.
5.3. Comparison of Identification Results of Different Network Models
The proposed model and the models in [13, 19, 20] are each cross-trained on the training set. The recognition accuracies of the four models for the three pest image types are shown in Table 3.
It can be seen from Table 3 that the proposed model identifies the three types of sample images with significantly higher accuracy than the comparison models; its average identification accuracy is 96.26%, which is 7.93%, 6.96%, and 4.37% higher than that of the models in [13, 19, 20], respectively. Because images of scarab pests are easy to recognize, the overall identification accuracy for them is relatively high, while the other two pests are similar in color to the leaves and present smaller targets, so their identification accuracy is lower. The model in [13] uses a convolutional neural network for local fuzzy image processing to identify crop pests and diseases; the model is relatively traditional and its performance advantage is not obvious, so its recognition accuracy is only 89.19%, and its normalized segmentation algorithm based on spectral graph theory is complex and time-consuming. The model in [19] builds a video detection model on the Faster R-CNN network, which can effectively identify plant diseases and insect pests, but it lacks reliable data preprocessing, which affects identification accuracy. The model in [20] improves recognition performance by combining image processing and machine learning technology, but the advantage of machine learning is not obvious: its recognition accuracy is 92.23% and its time consumption is 369 ms. The proposed model uses the improved AlexNet to identify pests and diseases; the simplified network shortens the identification time to 321 ms, while effective image preprocessing maintains the identification accuracy.
5.4. Analysis of the Test Results of the Model on Other Data Sets
In order to demonstrate the generalization ability of the proposed model, it is analyzed on other data sets. The results are shown in Table 4.
It can be seen from Table 4 that the proposed model obtains 96.75% and 95.31% recognition accuracy on the rice and potato leaf data sets, respectively, compared with 91.88% and 92.64% on the corn and yellow peach leaf data sets.
The task of leaf classification is to classify crops by the characteristics of their leaves after infection by diseases and pests, and image recognition within the leaves of a single crop is a fine-grained classification task. Although the AlexNet model was trained without the maize and yellow peach leaf data sets, it still achieves high classification accuracy on these other crop data sets, which indicates that the proposed model transfers well to some fine-grained crop leaf classification tasks.
6. Conclusion
Traditional methods of manual identification and machine learning for crop disease and insect pest identification require complex data preprocessing, and the fit of such models fluctuates greatly depending on the quality of the features, so their recognition performance is poor. This paper therefore proposes a recognition model for crop diseases and insect pests based on deep learning from the perspective of ecological environment protection. The fully connected layers of the AlexNet network are improved, and the improved network is used to analyze the preprocessed crop image set, realizing the recognition of crop diseases and pests. The experimental results show that the proposed model performs best as the number of input images changes, and its average recognition accuracy and recognition time for pear diseases and pests are 96.26% and 321 ms, respectively, both better than the comparison models. Moreover, when the proposed model is applied to other data sets, the recognition accuracy is no less than 91% and the loss is no more than 0.320, which can provide technical support for decision-making in crop pest control.
In the research of crop pest identification, there are still many areas worthy of further research, such as how to quickly calculate the effective area of disease and judge the severity of disease and insect pests in a region, so as to carry out effective treatment and prevent large-scale economic losses. These problems are still urgent problems to be solved in the pest management work, which is also the next key research content.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this paper.
T. Yang and C. Liu, "Recognition system for leaf diseases of Ophiopogon japonicus based on PCA-SVM," Plant Diseases and Pests, vol. 11, no. 2, pp. 11–15, 2020.
M. Long, C. Ouyang, H. Liu, and Q. Fu, "Image recognition of Camellia oleifera diseases based on convolutional neural network & transfer learning," Transactions of the Chinese Society of Agricultural Engineering, vol. 34, no. 18, pp. 194–201, 2018.
M. Ranjith, D. R. Bajya, T. Manoharan, and R. S. Ramya, "Biodiversity of insect pests and natural enemies affiliated with wheat (Triticum aestivum) ecosystem in Haryana," Indian Journal of Agricultural Sciences, vol. 88, no. 1, pp. 157–158, 2018.
D. Morankar, D. M. Shinde, and S. R. Pawar, "Identification of pests and diseases using AlexNet," SSRN Electronic Journal, vol. 7, no. 4, pp. 53–61, 2020.
L. Yang, L. Yang, L. Li, R. S. Du, and H. Dong, "Integrated prevention and control technology of major diseases and insect pests of strawberry," Plant Diseases and Pests, vol. 11, no. 2, pp. 33–36, 2020.
M. Keiichiro, "Factors leading insect pest outbreaks and preventive pest management: a review of recent outbreaks of forage crop pests in Japan," Japanese Journal of Applied Entomology and Zoology, vol. 62, no. 3, pp. 171–187, 2018.
A. Ferreira, S. C. Felipussi, R. Pires et al., "Eyes in the skies: a data-driven fusion approach to identifying drug crops from remote sensing images," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 12, pp. 4773–4786, 2019.
R. K. Gaur, M. Kumar, S. Sharma, and B. S. Yadav, "Survey studies on insects and non-insect pests associated with ber crop in South West Haryana," Journal of Entomology and Zoology Studies, vol. 8, no. 2, pp. 856–863, 2020.
G. Jayme, V. K. Luciano, H. V. Bernardo et al., "Annotated plant pathology databases for image-based detection and recognition of diseases," IEEE Latin America Transactions, vol. 16, no. 6, pp. 1749–1757, 2018.