Advanced Optimization Models For Smart City Applications
CRUN-Based Leaf Disease Segmentation and Morphological-Based Stage Identification
Natural growth is being displaced by globalization in today's world owing to the development of technology and changing landscapes. Most of today's youngsters, as well as our seniors, lack an appropriate understanding of natural species such as plant names, tree names, and medicinal plants. This is attributed to technological advancements and a decline in interest in gardening. To close this gap, horticulture may employ technology that aids in improving plant understanding and growth. This method is implemented in existing systems for diagnosing leaf diseases using image processing and machine learning techniques. In the existing process, the classification of leaf disease is performed using image processing steps, such as preprocessing, segmentation, feature extraction, feature reduction, and classification. Even though it utilizes multiple processing steps and region-based classification, it identifies only the type of disease. In this paper, a combined approach of region-based convolutional neural networks and U-Net (CRUN) is proposed for segmenting leaf diseases from an augmented leaf dataset. The segmented images are then subjected to a morphological process to identify the level of disease in the leaf. This identification reveals the leaf's condition and suggests a process to reduce the disease's spread to other leaves through the proper use of fertilizers. The proposed method is applied to real-time images of sugarcane leaf diseases, such as bacterial blight and red rot, and banana leaf diseases, such as yellow and black sigatoka. The method is also applied to public sugarcane and banana leaf datasets from the Kaggle website. The proposed CRUN algorithm effectively segments the disease region, and the morphological process helps to identify the disease level and protect the plant from further spread of disease.
As a result, the proposed CRUN and morphological tests are most effective for automating leaf disease detection and prevention.
1. Introduction
Technology has advanced in every industry in today's society. Users' cell phones aid them in acquiring access to the objects around them. Researchers have developed an easy-to-use method for analyzing leaf diseases using image processing technology, which might be offered as mobile apps in the future. To diagnose leaf disease, a combination of deep learning and morphological processes is proposed in this work. The following sections review some of the more recent approaches to identifying leaf diseases.
A segmentation-based alfalfa leaf disease classification was proposed in . Here, a K-nearest neighbor (KNN)-based ranking algorithm called Relief was used for feature selection. The selected features were then classified with the help of a support vector machine (SVM), with an accuracy of eighty percent. A manual region-of-interest-based leaf disease classification was proposed in . Here, the original image was resized to 256 rows and columns, the leaf disease region was segmented manually, and convolutional neural networks were then used for classification.
A specialized classification process for soy leaf disease was proposed in . Here, segmentation and feature extraction were performed on soy leaves collected at various heights from 1 to 16 m at desired intervals. SVM and KNN were used to classify the characteristics extracted from the different soy leaves. In particular, this approach produced good results for foliar diseases.
An image-transformation-based cucumber leaf disease classification was proposed in . Here, the LAB colour space was used for segmentation and feature extraction instead of the regular RGB colour space. Traditional algorithms such as k-means clustering were used for segmentation, and the segmented images were then subjected to singular-value-decomposition-based feature reduction to reduce the textural features. Classification was then performed using an SVM classifier.
A deep learning CNN algorithm was proposed in  for rice leaf disease classification, achieving an accuracy above ninety-five percent with the help of a 10-fold cross-validation technique. A survey of leaf stress analysis using multiband images was also examined; here, the term multiband image refers to a hyperspectral image, and the techniques used for the analysis are discussed in .
A segmentation-based leaf disease classification was proposed in . Traditional algorithms such as simple linear iterative clustering were used for segmentation, and the segmented images were subjected to feature extraction to obtain colour- and texture-related features. Classification was then performed using an SVM classifier.
In the above articles, feature reduction relied on traditional techniques: ranking algorithms, cross-validation, singular value decomposition, or principal component analysis. These approaches were based on manual thresholds or manual feature selection. In recent years, an automated approach to feature selection has emerged that uses metaheuristic algorithms to solve fitness functions.
Based on this metaheuristic approach, in  a bacterial foraging algorithm was used to select the optimal characteristics for fungal leaf disease classification. Here, the reduced characteristics were classified using a neural network with a radial basis activation function. In , the authors also utilized a CNN for classification, but they performed their classification on pretrained apple leaves of four diseases, and it enhanced the accuracy of the former AlexNet by ten percent. Most researchers in recent years have utilized CNN deep learning algorithms for leaf disease classification .
The paper is organized in the following manner. Section 2 highlights recent works on leaf diseases. Sections 3 and 4 describe the proposed method and discuss the results, respectively. Finally, Section 5 summarizes the advantages of the proposed method, and Section 6 proposes future work.
2. Literature Survey
The authors in  investigated numerous parameters for the categorization of soy leaf diseases. The results revealed that images taken at 1 m and 2 m, together with colour and texture features classified using convolutional neural networks, performed better in detecting foliar illnesses in soybean leaves than prior SVM and KNN classification.
The article from  enhanced a convolutional neural network for predicting crop diseases. It improved the feature set by including squeeze-and-excitation blocks and, as a result, obtained 91 percent accuracy. However, because it does not include any multiclass comparison, its result is of limited value.
The authors in  surveyed the importance of leaves in plant disease detection and explored the various methodologies and attributes utilized in crop diagnosis of diseases. Dhingra et al.  suggested a novel approach for classifying leaf diseases. The technique is based on colour, shape, and the histogram.
For plant leaf disease classification, an optimized nine-layer CNN was proposed by . It attained an accuracy of 96.11 percent. The efficiency was increased by augmenting the dataset in six different ways and tweaking the batch size, epochs, and iterations.
A deep learning strategy to classify soybean plant disease was proposed in . It encompasses sixteen disease classes in the soy crop and has an accuracy of 98.14 percent. However, it was built specifically to complete this categorization for a single leaf. Chen et al.  used a combination of approaches to enhance tomato leaf disease detection. Despite using various methodologies, it obtained only eighty-nine percent accuracy for tomato leaf diseases. The artificial bee colony technique was employed by the researchers to determine the best threshold for segmenting the leaf areas.
A basic technique for groundnut leaf disease categorization was discussed in . Along with the late blight spot, it was able to categorise five main leaf diseases. Altogether, it classified these disorders with 97.11 percent accuracy. CNN was used to extract features and an SVM was used to identify pathogens in rice crops .
The olive leaf disease classification uses the transfer learning principle in a convolutional neural network . The best results were achieved with ADAM and SGD optimization in the VGG-16 and VGG-19 CNNs. However, processing various leaves necessitates a lengthy period.
For segmentation, U-Net is utilized in  to exclude complicated backgrounds, reducing their effect on identification results.
A few other networks handle segmentation better. U-Net++ [22, 23] connects the encoder and decoder via densely layered skip connections. It is more effective than U-Net, but its massive intermediate convolutions are expensive.
In , DIResUNet uses modified ResUNet blocks and inception modules with a dense global spatial pyramid pooling (DGSPP) block; it is structurally similar to U-Nets but provides higher extraction performance and good generalization ability for multiclass semantic segmentation of HRRS images.
Most deep convolutional neural networks were used to identify and classify plant diseases based on their probability values. Pixel-level segmentation quantifies plant disease severity, which helps calculate pesticide dosage. In , the authors presented a deep learning approach to detect and measure grey mould disease in strawberry plants.
In the existing process, the classification of leaf diseases is performed using image processing steps such as preprocessing, segmentation, feature extraction, feature reduction, and classification. Even though it utilizes multiple processing steps to fine-tune the layers in neural networks  or region-based classification or deep learning algorithms, it identifies only the type of disease.
The use of deep learning networks in phytopathology is on the rise due to the abundance of labelled leaf images and computationally efficient hardware [27–30]. Several open-source datasets exist. However, most were collected in a lab with a uniform background and controlled illumination. Field conditions are the opposite: a cluttered background, occlusion, and uneven illumination all degrade image quality. This is one of the key reasons for the poor performance of models trained on controlled-environment images and assessed on field images.
Hence, the proposed model is used for segmenting leaf diseases from the augmented leaf dataset. The segmented images are then subjected to a morphological process to identify the level of disease in the leaves. This identification reveals the leaf's condition and suggests a process to reduce the disease's spread to other leaves through the proper use of fertilizers.
3. Proposed Method
In this paper, a combination of deep learning and morphological processing is proposed to identify leaf disease stages and preventative actions in banana and sugarcane leaves. Figure 1 depicts the steps of the proposed method in a graphical format.
The following steps give a short description of the CRUN-morphological-based leaf disease level identification:
(1) The input dataset comprises both real-time and public datasets of banana and sugarcane leaves.
(2) Both datasets are subjected to a filtering process using a median filter. Here, the filtering is carried out on the colour-space-transformed image.
(3) The preprocessed and raw images are subjected to scaling, rotation, shifting, noise addition, and mirror-image formation to produce a new training dataset through augmentation, enhancing the deep-learning-based segmentation process.
(4) The augmented images are subjected to diseased-part segmentation using the CRUN approach.
(5) In CRUN, the region-based convolutional neural network is first applied to separate the leaf foreground from the background.
(6) The U-Net is then used to segment the diseased part from the leaves.
(7) Using this CRUN approach, the diseased part of the leaf is extracted.
(8) After extraction of the diseased part, a morphological operation is applied to the region to extract the exact diseased region.
(9) Finally, the bwarea parameter is used to estimate the diseased area.
(10) The difference between the original and diseased-part pixel values is calculated.
(11) The level of disease is estimated using the difference value.
(12) The CRUN performance is evaluated in terms of classification performance metrics.
3.1. Preprocessing
In this approach, three preprocessing steps are performed on the input images before the augmentation process:
(i) Image resizing: the two datasets have different image sizes, and larger sizes take more computational time. To overcome this, the images are resized to 2.66 inches in height and width for further processing. This resize operation is the first preprocessing step in this approach.
(ii) Image conversion: in this step, the resized image is subjected to a colour-channel transformation. Here, the image is converted from the three input RGB channels to a single grey channel using
I_grey = 0.2989 I_R + 0.5870 I_G + 0.1140 I_B.
The subscripts in the above equation denote the channel, and the term I denotes the corresponding pixel values.
(iii) Filtering: this is the final preprocessing step of the proposed method. Here, the output image from the image conversion process is filtered to remove any artefacts that arise during the above preprocessing steps.
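The grey-channel conversion step can be sketched as follows. This is a minimal Python illustration; the paper's implementation is in MATLAB, and the standard luminance weights used by MATLAB's rgb2gray (0.2989, 0.5870, 0.1140) are assumed here.

```python
def rgb_to_grey(pixel):
    """Convert one (R, G, B) pixel to a single grey value using the
    standard luminance weights (an assumption; the paper's exact
    coefficients are not stated)."""
    r, g, b = pixel
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

def convert_image(image):
    """Convert a nested list of (R, G, B) pixels to a grey-channel image."""
    return [[rgb_to_grey(p) for p in row] for row in image]
```

For example, a pure white pixel (255, 255, 255) maps to a grey value of approximately 255, and a pure black pixel to 0.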
Here, median filtering is carried out using the steps in  with a 3 × 3 window. The steps in the median filter are described in the following example:
(1) The greyscale image pixel values are shown in Figure 2.
(2) Arrange the pixel values that fall under the 3 × 3 window in ascending order: 121, 122, 124, 125, 131, 144, 151, 152, and 163.
(3) Calculate the median of the step 2 output (here, 131) and replace the centre pixel of the window with this median value.
(4) Repeat steps 2 and 3 for the complete image and the whole dataset.
By stacking the filtered greyscale channel three times, an RGB-format filtered image is formed. With these three processes, the basic preprocessing is completed before the augmentation process is carried out on the input images.
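The median-filter steps above can be sketched in Python. This is a minimal version that leaves border pixels unchanged; the paper's MATLAB implementation may handle image borders differently.

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a greyscale image (list of rows);
    border pixels are left unchanged in this sketch."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Collect and sort the 9 values under the 3x3 window.
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of 9 sorted values
    return out
```

Using the example values from the text, a window containing 121, 122, 124, 125, 131, 144, 151, 152, and 163 replaces its centre pixel with the median, 131.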
3.2. Extended Dataset Using Augmentation Process
In its literal meaning, the term augmentation refers to an increase in size. Here, the dataset is enlarged by performing the actions listed in Table 1.
Using this process, the number of images in the dataset is increased, with the augmented images added alongside the preprocessed and original images.
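A few of the augmentation operations named in the text (mirroring, rotation, and noise addition) can be sketched as follows. This is an illustrative Python version, not the paper's exact Table 1 pipeline; the noise range and the set of transforms are assumptions.

```python
import random

def mirror(img):
    """Left-right mirror of a 2D greyscale image (list of rows)."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def add_noise(img, scale=5, seed=0):
    """Add small uniform noise to each pixel, clipped to 0..255.
    The noise range is a hypothetical choice for illustration."""
    rng = random.Random(seed)
    return [[max(0, min(255, p + rng.randint(-scale, scale))) for p in row]
            for row in img]

def augment(img):
    """Produce an augmented set: the original plus transformed copies."""
    return [img, mirror(img), rotate90(img), add_noise(img)]
```

Each original image thus yields several training samples, which is the effect the augmentation step relies on.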
3.3. Region-Based Convolutional Neural Network (RCNN)
In this step, the regions in the image, such as the leaf (foreground), background parts (black or white colour), and other regions of the leaf image, are segmented using RCNN.
The layers in the RCNN structure are similar to those of AlexNet, as shown in Figure 3.
Like AlexNet, RCNN has an input layer, convolution layers, and max-pooling layers. However, instead of the softmax and fully connected output layers, RCNN has a bounding-box regressor and an SVM as its final layers, as shown in Figure 4. The greatest advantage of RCNN is that all network layers can be updated during training. No disc space is required for caching. In addition, it provides high detection quality.
Using this bounding box, the leaf images are segmented to detect the leaf and diseased regions from the whole image. The bounding box indicates the different regions like foreground, leaf, background colour, and colour variation in the image. Then, this bounding box output is subjected to U-Net for the final segmentation of the diseased region.
3.4. Diseased Region Extraction Using U-Net
The foreground and colour-variation regions from the RCNN output are subjected to U-Net for the final diseased-region classification process.
As per the  process, U-Net segments the diseased region from the bounded region outputs.
The name U-Net comes from its architectural shape: the architecture forms a U, with numerous convolution and max-pooling layers, as shown in Figure 5. Based on U-Net's medical image segmentation properties, it is used here for identifying the diseased region from the bounding-box RCNN output. For both RCNN and U-Net, the image size is kept at 256 × 256.
Even though RCNN and U-Net both aim to segment the diseased region, this architecture does not use softmax or fully connected layers. Instead, it relies heavily on convolution and max-pooling layers:
(1) The phrase up refers to upsampling the image by a ratio of 2.
(2) The word down refers to downsampling the pixel values by a ratio of 2.
(3) The name conv3 refers to performing convolution on all three image channels using the [128 128] convolution filter.
(4) The term concatenate refers to copying the preceding outcome and combining it with the present state.
(5) The phrase max-pool refers to downsampling the previous stage's results by taking the maximum over each pooling window.
By using the method described above, the image is first downsampled and then convolved with a convolution layer. The convolved outputs are then upsampled, and convolution filtering is used to partition the images into healthy and unhealthy regions. To segment the diseased region in the leaf, the aforementioned U-Net architecture is applied to both the augmented and original datasets.
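The up, down, and max-pool operations described above can be illustrated with a factor-2 max-pooling downsample and a nearest-neighbour upsample. This is a simplified Python sketch; the actual U-Net interleaves these operations with learned convolutions, which are omitted here.

```python
def max_pool_2x2(img):
    """Downsample by a ratio of 2: keep the maximum of each 2x2 block
    (the max-pool operation; image sides are assumed even)."""
    h, w = len(img), len(img[0])
    return [[max(img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1])
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def upsample_2x(img):
    """Upsample by a ratio of 2 using nearest-neighbour duplication
    (a simple stand-in for the 'up' path of the network)."""
    out = []
    for row in img:
        doubled = [p for p in row for _ in (0, 1)]
        out.append(doubled)
        out.append(list(doubled))
    return out
```

A 2 × 2 block such as [[1, 2], [3, 4]] pools down to [[4]], and upsampling a single pixel [[5]] produces a 2 × 2 block of 5s.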
3.5. Performance Evaluation
CRUN performance is evaluated using the traditional classification performance metrics listed in Table 2.
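Table 2 is not reproduced here, so the usual confusion-matrix definitions of accuracy, precision, recall (sensitivity), specificity, and F1 score are assumed. They can be computed as follows:

```python
def metrics(tp, fp, fn, tn):
    """Standard classification metrics from confusion-matrix counts:
    true/false positives (tp, fp) and false/true negatives (fn, tn)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}
```

For example, with 8 true positives, 2 false positives, 2 false negatives, and 8 true negatives, every metric evaluates to 0.8.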
3.6. Morphological Processing
After segmenting the diseased region, morphological processes, namely top-hat and bottom-hat transformations, are performed on the segmented region to extract the exact diseased region. For both transformations, a disk-shaped structuring element b of size 12 is used. The overall morphological formula for extracting the leaf region is
M = (I − (I ∘ b)) − ((I • b) − I)
Here, M denotes the morphological output, obtained as the difference of the top hat, I − (I ∘ b), and the bottom hat, (I • b) − I, of the U-Net output image I, where ∘ and • denote morphological opening and closing, respectively. The sample operation of the proposed morphological process is shown in Figure 6.
From the morphological output, the area covered by black and white pixels is measured using the bwarea property. With this information, the disease level in the leaf region is identified as per Table 3.
At these levels, the leaves in the mild and moderate categories can be protected with proper fertilizers to prevent the disease from spreading through the crop. The fertilizer in Table 4 is chosen based on the corresponding nutritional deficiency of the plant.
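The morphological severity estimation described above can be sketched as follows. This Python illustration substitutes a 3 × 3 square structuring element for the paper's size-12 disk, and the level thresholds are hypothetical placeholders since Table 3 is not reproduced here.

```python
def _morph(img, op):
    """Apply op (min = erosion, max = dilation) over 3x3 neighbourhoods,
    replicating edge pixels at the image border."""
    h, w = len(img), len(img[0])
    return [[op(img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             for x in range(w)] for y in range(h)]

def top_minus_bottom_hat(img):
    """M = (I - opening(I)) - (closing(I) - I): top hat minus bottom hat."""
    opened = _morph(_morph(img, min), max)   # opening = dilation of erosion
    closed = _morph(_morph(img, max), min)   # closing = erosion of dilation
    h, w = len(img), len(img[0])
    return [[(img[y][x] - opened[y][x]) - (closed[y][x] - img[y][x])
             for x in range(w)] for y in range(h)]

def severity_level(binary_img, thresholds=(0.1, 0.3)):
    """Map the diseased (white) area fraction to a level.
    The thresholds here are hypothetical stand-ins for Table 3."""
    h, w = len(binary_img), len(binary_img[0])
    white = sum(p for row in binary_img for p in row)  # bwarea-like count
    frac = white / (h * w)
    if frac < thresholds[0]:
        return "mild"
    if frac < thresholds[1]:
        return "moderate"
    return "severe"
```

On a perfectly flat region, the top-hat and bottom-hat responses cancel, so M is zero; disease spots stand out as nonzero responses, and the binarized area fraction then drives the level decision.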
4. Experimental Results and Discussion
In this section, the proposed CRUN-based morphological approach for disease identification in leaves was implemented using MATLAB R2021a under a Windows 10 environment.
In this paper, both real-time and public databases are used for the analysis, and in both, the sugarcane and banana leaves are considered.
4.1. Real-Time Dataset
The real-time dataset is collected using a Nikon D5300 camera. The images are captured with the camera automatically adjusting the flash, and no bias correction is applied to the images. The image resolution is set to 96 dpi at 5.3 inches in height and 5.3 inches in width.
Using these conditions, three major categories of banana and sugarcane leaf images are captured. The three major categories of banana leaves are healthy (HB), yellow sigatoka (YS), and black sigatoka (BS). In sugarcane, the three major categories are healthy (HS), bacterial blight (BB), and red rot (RR). Table 5 gives the image count in each category in the real-time dataset.
The banana leaves comprise 3510 images across all three categories. Similarly, sugarcane has 3126 images across all three categories.
4.2. Public Dataset
In this, both the banana and sugarcane leaf datasets are taken from the Kaggle website [33, 34].
In bananas, there are four major categories: healthy (HB), Pestalotiopsis leaf blight (PLB), sigatoka (S), and cordana (CB) disease. In sugarcane, the three major categories are healthy (HS), bacterial blight (BB), and red rot (RR). Table 6 gives the image count for each category in the public dataset.
In total, the banana leaves comprise 935 images across all four categories. Similarly, the sugarcane leaves comprise 300 images across all three categories. Both datasets were subjected to preprocessing and segmentation to extract the diseased region from the leaf. The parameters used for evaluating the performance of the proposed model are listed in Table 7.
The sample input images for the real-time and public banana dataset are shown in Figure 7.
Similarly, the sample sugarcane leaf image from the dataset is shown in Figure 8.
The sample preprocessed greyscale output for the unhealthy leaves is shown in Figure 9.
The preprocessed images are subjected to augmentation and then subjected to CRUN architecture to segment the diseased region from the leaf.
Using the architecture in Figure 10, the leaves get segmented and the segmented diseased region using CRUN is shown in Figure 11.
Using Table 2, the CRUN performance is evaluated for banana and sugarcane. The results are tabulated from Tables 8–10.
Tables 8 to 10 show that the proposed CRUN is the best at segmenting the diseased region, and pictorial representations of the comparative results are shown in Figures 12–14. The segmented image is then cropped using the bounding box, as in Figure 15.
The cropped image is then subjected to a morphological operation with a disk element of size 12, followed by a binarization process to estimate the level of disease. The corresponding output is shown in Figure 16.
The white region indicates the diseased part, and the black region indicates the healthy part. The area covered by black and white pixels is then estimated to analyze the level of disease. Based on the level, the actions in Table 4 are taken.
Therefore, the proposed approach is suitable for predicting the level of disease and the corresponding fertilizers can be used.
5. Conclusion
In this paper, the proposed CRUN approach is used for segmenting the diseased parts of the leaf. The proposed morphological operation is well suited to identifying the level of disease in the leaf. Finally, area estimation on the pixels helps to identify the severity level of the leaf so that the necessary action can be taken on the diseased region. Therefore, the proposed CRUN-based morphological approach is suitable for predicting the level of disease and the corresponding usage of fertilizers.
6. Future Works
In the future, the proposed method can be further enhanced by using an improved morphological process.
Data Availability
The data can be shared by the authors J. Sujithra and M. Ferni Ukrit upon request. The data are not publicly available due to privacy concerns.
Ethical Approval
Not applicable, as no human or animal samples were involved in this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
F. Qin, D. Liu, B. Sun, L. Ruan, Z. Ma, and H. Wang, “Identification of alfalfa leaf diseases using image recognition technology,” PLoS One, vol. 11, no. 12, Article ID e0168274, 2016.
S. Sladojevic, M. Arsenovic, A. Anderla, D. Culibrk, and D. Stefanovic, “Deep neural networks based recognition of plant diseases by leaf image classification,” Computational Intelligence and Neuroscience, vol. 2016, Article ID 3289801, 11 pages, 2016.
N. A. D. S. Belete, D. A. Guimaraes, and H. Pistori, “Identification of soybean foliar diseases using unmanned aerial vehicle images,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 12, pp. 2190–2194, 2017.
S. Zhang, X. Wu, Z. You, and L. Zhang, “Leaf image based cucumber disease recognition using sparse representation classification,” Computers and Electronics in Agriculture, vol. 134, pp. 135–141, 2017.
Y. Lu, S. Yi, N. Zeng, Y. Liu, and Y. Zhang, “Identification of rice diseases using deep convolutional neural networks,” Neurocomputing, vol. 267, pp. 378–384, 2017.
A. Lowe, N. Harrison, and A. P. French, “Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress,” Plant Methods, vol. 13, no. 1, pp. 80–12, 2017.
Y. Sun, Z. Jiang, L. Zhang, W. Dong, and Y. Rao, “SLIC_SVM based leaf diseases saliency map extraction of tea plant,” Computers and Electronics in Agriculture, vol. 157, pp. 102–109, 2019.
S. S. Chouhan, A. Kaul, U. P. Singh, and S. Jain, “Bacterial foraging optimization based radial basis function neural network (BRBFNN) for identification and classification of plant leaf diseases: an automatic approach towards plant pathology,” IEEE Access, vol. 6, pp. 8852–8863, 2018.
B. Liu, Y. Zhang, D. He, and Y. Li, “Identification of apple leaf diseases based on deep convolutional neural networks,” Symmetry, vol. 10, no. 1, p. 11, 2017.
X. Chao, G. Sun, H. Zhao, M. Li, and D. He, “Identification of apple tree leaf diseases based on deep learning models,” Symmetry, vol. 12, no. 7, p. 1065, 2020.
E. C. Tetila, B. B. Machado, G. K. Menezes et al., “Automatic recognition of soybean leaf diseases using UAV images and deep convolutional neural networks,” IEEE Geoscience and Remote Sensing Letters, vol. 17, no. 5, pp. 903–907, 2020.
J. Hang, D. Zhang, P. Chen, J. Zhang, and B. Wang, “Classification of plant leaf diseases based on improved convolutional neural network,” Sensors, vol. 19, no. 19, p. 4161, 2019.
S. Kaur, S. Pandey, and S. Goel, “Plants disease identification and classification through leaf images: a survey,” Archives of Computational Methods in Engineering, vol. 26, no. 2, pp. 507–530, 2019.
G. Dhingra, V. Kumar, and H. D. Joshi, “A novel computer vision based neutrosophic approach for leaf disease identification and classification,” Measurement, vol. 135, pp. 782–794, 2019.
A. P. Goel, “Identification of plant leaf diseases using a nine-layer deep convolutional neural network,” Computers & Electrical Engineering, vol. 76, pp. 323–338, 2019.
A. Karlekar and A. Seal, “SoyNet: soybean leaf diseases classification,” Computers and Electronics in Agriculture, vol. 172, Article ID 105342, 2020.
X. Chen, G. Zhou, A. Chen, J. Yi, W. Zhang, and Y. Hu, “Identification of tomato leaf diseases based on combination of ABCK-BWTR and B-ARNet,” Computers and Electronics in Agriculture, vol. 178, Article ID 105730, 2020.
K. Suganya Devi, P. Srinivasan, and S. Bandhopadhyay, “H2K - a robust and optimum approach for detection and classification of groundnut leaf diseases,” Computers and Electronics in Agriculture, vol. 178, Article ID 105749, 2020.
F. Jiang, Y. Lu, Y. Chen, D. Cai, and G. Li, “Image recognition of four rice leaf diseases based on deep learning and support vector machine,” Computers and Electronics in Agriculture, vol. 179, Article ID 105824, 2020.
S. Uğuz and N. Uysal, “Classification of olive leaf diseases using deep convolutional neural networks,” Neural Computing & Applications, vol. 33, no. 9, pp. 4133–4149, 2021.
G. Hu and M. Fang, “Using a multi-convolutional neural network to automatically identify small-sample tea leaf diseases,” Sustainable Computing: Informatics and Systems, vol. 35, Article ID 100696, 2022.
D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
S. M. M. Kahaki, M. Jan Nordin, A. H. Ashtari, and S. J. Zahra, “Deformation invariant image matching based on dissimilarity of spatial features,” Neurocomputing, vol. 175, pp. 1009–1018, 2016.
S. Lal, J. Nalini, C. S. Reddy, and F. Dell’Acqua, “DIResUNet: architecture for multiclass semantic segmentation of high resolution remote sensing imagery data,” Applied Intelligence, pp. 1–21, 2022.
A. Bhujel, F. Khan, J. K. Basak et al., “Detection of gray mold disease and its severity on strawberry using deep learning networks,” Journal of Plant Diseases and Protection, vol. 129, pp. 1–14, 2022.
M. Turkoglu, B. Yanikoğlu, and D. Hanbay, “PlantDiseaseNet: convolutional neural network ensemble for plant disease and pest detection,” Signal, Image and Video Processing, vol. 16, no. 2, pp. 301–309, 2022.
M. Brahimi, M. Arsenovic, S. Laraba, and S. Sladojevic, Deep Learning for Plant Diseases: Detection and Saliency Map Visualization, Springer, Cham, 2018.
J. Chen, J. Chen, D. Zhang, Y. Sun, and Y. A. Nanehkaran, “Using deep transfer learning for image-based plant disease identification,” Computers and Electronics in Agriculture, vol. 173, Article ID 105393, 2020.
K. P. Ferentinos, “Deep learning models for plant disease detection and diagnosis,” Computers and Electronics in Agriculture, vol. 145, pp. 311–318, 2018.
S. P. Mohanty, D. P. Hughes, and M. Salathé, “Using deep learning for image-based plant disease detection,” Frontiers of Plant Science, vol. 7, p. 1419, 2016.
P. M. Narendra, “A separable median filter for image noise smoothing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-3, no. 1, pp. 20–29, 1981.
A. G. Smith, J. Petersen, R. Selvan, and C. R. Rasmussen, “Segmentation of roots in soil with U-Net,” Plant Methods, vol. 16, no. 1, pp. 13–15, 2020.