Journal of Sensors


Research Article | Open Access

Volume 2021 |Article ID 8857931 | https://doi.org/10.1155/2021/8857931

Lei Sun, Xiaofei Fan, Sheng Huang, Shuangxia Luo, Lili Zhao, Xueping Chen, Yi He, Xuesong Suo, "Research on Classification Method of Eggplant Seeds Based on Machine Learning and Multispectral Imaging Classification Eggplant Seeds", Journal of Sensors, vol. 2021, Article ID 8857931, 9 pages, 2021. https://doi.org/10.1155/2021/8857931

Research on Classification Method of Eggplant Seeds Based on Machine Learning and Multispectral Imaging Classification Eggplant Seeds

Academic Editor: Eduard Llobet
Received: 16 Apr 2020
Accepted: 12 Aug 2021
Published: 14 Sep 2021

Abstract

In this study, eggplant seeds of fifteen varieties were selected for discriminant analysis with a multispectral imaging technique. Seventy-eight features were extracted from the multispectral images of individual eggplant seeds and classified using a support vector machine (SVM) and a one-dimensional convolutional neural network (1D-CNN), yielding overall accuracies of 90.12% and 94.80%, respectively. A two-dimensional convolutional neural network (2D-CNN) was also adopted for discrimination of seed varieties, achieving an accuracy of 90.67%. This study not only demonstrated that multispectral imaging combined with machine learning can serve as a high-throughput, nondestructive tool to discriminate seed varieties but also revealed that the shape of the seed shell may not exactly follow the female parent owing to genetic and environmental factors.

1. Introduction

Discrimination among different seed varieties is important for species registration, protection of plant breeders' intellectual property, and development of new varieties for the market [1]. Since most crops are grown from seeds, the breeding of varieties and the quality of seeds directly affect yield. Eggplant (Solanum melongena L.) is an important vegetable species planted all over the world. In some regions, eggplant seeds of a given variety are sold mixed with fake seeds or seeds of other varieties, which has a negative impact on seed markets. In addition, seed mixing between varieties may affect actual production processes, which complicates seed classification and reduces crop yield.

The traditional method of sorting seeds typically relies on manual inspection [2, 3], which is inefficient and subjective, as the inspection procedures are time-consuming and require experienced seed analysts. Researchers have also used chemical classification markers and multivariate analysis techniques to discriminate eggplant seeds [4], but these studies showed that identifying plant accessions in this way remains difficult. The development of a noninvasive, rapid, and reliable technique for identifying varieties and verifying their purity would therefore be highly advantageous [5].

Multispectral and hyperspectral images contain both morphological and spectral information. Unlike regular RGB color imaging, spectral imaging can reveal information that is invisible to the human eye. Multispectral and hyperspectral imaging have been widely used in seed research, for example, to predict seed viability and vigor [6] and to classify maize seed defects [7], tomato seed cultivars [8], and maize seed varieties [9]. Orrillo et al. used near-infrared hyperspectral imaging to identify black pepper adulterated with papaya seeds, a common adulterant, and found that partial least squares regression preprocessed with standard normal variate plus second derivatives gave the best prediction capability [10].

In recent years, deep learning techniques have developed rapidly. For example, search engines, recommendation systems, and image and speech recognition systems have adopted deep learning techniques and achieved decent results [11]. As GPU performance and parallel computing power continue to improve, it has become possible to process image data in real time. Excellent results have been achieved with CNNs for image recognition, and researchers have applied CNNs to spectral images, where they have been widely used in agriculture. Park et al. developed an approach for diagnosis of Marssonina blotch by monitoring hyperspectral images of apple leaves [12]. Zhao et al. proposed a superpixel-based multiple local convolutional neural network (SML-CNN) model for panchromatic and multispectral image classification [13].

CNNs have been widely applied to images and also perform remarkably well on one-dimensional data. Wei et al. used a CNN for hyperspectral image classification and achieved decent results [14]. Qiu et al. applied a CNN to the spectral data of rice seeds and accurately classified rice seed varieties [15]. Levent proposed an adaptive 1D-CNN implementation for bearing health monitoring and demonstrated reduced computational complexity without compromising fault detection accuracy [16].

The purpose of this study was to classify fifteen varieties of eggplant seeds by image recognition and feature extraction. A 2D-CNN was used to classify the images. We also extracted seventy-eight image features from the multispectral images, and a 1D-CNN was used to find classification criteria based on the extracted features. As a traditional machine learning algorithm, a support vector machine (SVM) was employed for comparison with the CNNs.

2. Materials and Methods

2.1. Image Acquisition Device

The image acquisition device, VideometerLab 4 (VM) [17], is shown in Figure 1(a). The VM is equipped with nineteen LEDs, each emitting light at a designated center wavelength, and acquires multispectral images of nineteen bands in which each pixel represents the spectral reflectance from ultraviolet to near infrared (365–970 nm). The seeds can be easily segmented from the images because of the color contrast between the seeds and the blue background [18]. The image processing procedures were completed using the VideometerLab software, and MATLAB (2018a, MathWorks, Natick, MA, USA) was used to develop the classification models.

2.2. Sample Preparation and Image Segmentation

Fifteen varieties of eggplant seeds (17-5, 17-12, 17-14, 17-15, 17-24, 17-25, 17-26, 17-38, 17-39, 17-41, 17-49, 17-52, 17-53, 17-54, and 17-55) harvested in 2017 were used in this experiment. All seeds were cultivated by the Provincial Key Laboratory of Hebei Agricultural University. A random number of seeds were placed in a petri dish with a diameter of 9 centimeters for image acquisition (Figure 1(b)). The total number of seeds was 2872; 20% were randomly selected as the test set, 10% as the validation set, and the rest were used as the training set. Table 1 shows the numbers of seeds for each variety.
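As a rough illustration, the 70/20/10 split described above can be sketched in Python. The function name, random seed, and the fact that the split is drawn over the pooled sample are illustrative assumptions; judging from Table 1, the study appears to split within each variety, which yields slightly different per-set counts.

```python
import numpy as np

def split_indices(n_samples, test_frac=0.2, val_frac=0.1, seed=0):
    """Randomly partition sample indices into training, testing, and
    validation sets (70/20/10 in this study)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_test = int(n_samples * test_frac)
    n_val = int(n_samples * val_frac)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    return train, test, val

train, test, val = split_indices(2872)
```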


Table 1: The numbers of seeds for each variety.

Varieties   Training set   Testing set   Validation set
17-5        101            30            15
17-12       119            34            17
17-14       190            55            27
17-15       153            46            29
17-24       93             27            14
17-25       204            58            29
17-26       212            61            30
17-38       138            40            20
17-39       110            31            16
17-41       124            35            18
17-49       99             28            14
17-52       97             28            14
17-53       156            44            22
17-54       107            31            15
17-55       99             28            14

The Otsu method [19] was used to obtain the binary images, and a series of morphological operations were performed to remove noise in the background. A watershed algorithm was used for image segmentation. Figure 1(c) shows the boundary of the connected seed image after segmentation, and Figure 1(d) shows the singulated seeds after segmentation.
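The Otsu step can be illustrated with a minimal NumPy implementation: the threshold is chosen to maximize the between-class variance of the gray-level histogram. The synthetic two-level image below is illustrative only; the morphological noise removal and watershed segmentation would in practice come from a library such as scikit-image and are omitted here.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level that maximizes the
    between-class variance of the histogram."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                   # probability of class 0
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_t = mu[-1]                          # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan             # skip degenerate thresholds
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.nanargmax(sigma_b))

# synthetic bimodal image: dark "background" and bright "seed" pixels
img = np.concatenate([np.full(500, 40), np.full(100, 200)]).astype(np.uint8)
t = otsu_threshold(img)
binary = img > t
```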

2.3. Image Augmentation

The accuracy of a CNN is positively correlated with the number of training samples [20]. To account for uncertain factors such as the placement angle and position of a seed during recognition, we randomly rotated or translated the images before training in order to enlarge the sample size. Image augmentation was performed in MATLAB.
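A minimal sketch of random rotate-or-translate augmentation, using SciPy in place of the MATLAB routines the paper used; the angle range, shift range, and boundary mode are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def augment(image, rng):
    """Randomly rotate or translate a seed image to enlarge the
    training set (parameter ranges are illustrative)."""
    if rng.random() < 0.5:
        angle = rng.uniform(-180, 180)
        return ndimage.rotate(image, angle, reshape=False, mode="nearest")
    shift = rng.integers(-5, 6, size=2)
    return ndimage.shift(image, shift, mode="nearest")

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[20:40, 25:35] = 1.0          # toy "seed" blob
aug = augment(img, rng)
```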

2.4. Feature Extraction and Normalization

We extracted seventy-eight features using the Blob Tool (a built-in tool of the VM software), including colors, textures, shapes, smoothness, morphological texture, and spectral texture under nineteen bands. In order to speed up convergence to the optimal solution and improve the accuracy of the result, we normalized the original data. With the original data expressed as $x$, the normalized data can be expressed as

$$x' = \frac{x - \mu}{\sigma},$$

where the mean $\mu$ and the standard deviation $\sigma$ were calculated for each feature of a single seed.
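The z-score normalization above can be sketched as follows. Computing the statistics per feature across seeds (axis 0) is an assumption here; the paper's wording ("for each feature of a single seed") leaves the axis ambiguous.

```python
import numpy as np

def zscore(features):
    """Column-wise z-score: x' = (x - mean) / std for each of the
    seventy-eight extracted features."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    sigma[sigma == 0] = 1.0      # guard against constant features
    return (features - mu) / sigma

# stand-in feature matrix: 100 seeds x 78 features
X = np.random.default_rng(1).normal(5.0, 2.0, size=(100, 78))
Xn = zscore(X)
```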

2.5. Support Vector Machine

The SVM separates classes by finding a separating hyperplane; the support vectors are the data points closest to that hyperplane, and the distance from the support vectors to the hyperplane is maximized. A kernel function is used in the SVM to create a designated linear or nonlinear mapping of the data into a high-dimensional feature space. We compared three kernel functions (radial basis function (RBF), polynomial, and linear) in our study. Scikit-learn [21] was used for the SVM algorithm.
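A minimal scikit-learn sketch of the three-kernel comparison; the synthetic data (78 features, 15 classes, mirroring the study's dimensions), sample counts, and default hyperparameters are illustrative assumptions, not the study's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# stand-in data with 78 features and 15 classes, mirroring the study
X, y = make_classification(n_samples=600, n_features=78, n_informative=30,
                           n_classes=15, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)

scores = {}
for kernel in ("rbf", "linear", "poly"):   # the three kernels compared
    clf = SVC(kernel=kernel).fit(Xtr, ytr)
    scores[kernel] = clf.score(Xte, yte)
```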

2.6. Convolutional Neural Network

The 1D-CNN architecture is shown in Figure 2(a). We designed the network structure for inputs of one-dimensional features; the seventy-eight features (colors, textures, shapes, smoothness, morphological texture, and spectral texture under nineteen bands) serve as inputs. There are three main convolutional blocks in the architecture. A convolutional block (Conv Block) consists of a convolutional layer and a rectified linear unit (ReLU) layer [22]; 3 represents the size of the filters, and 512, 256, and 128 represent the numbers of feature maps. The last block is a fully connected layer followed by a classification layer with Softmax. The Softmax function is defined as

$$\mathrm{Softmax}(z_i)_j = \frac{e^{z_{ij}}}{\sum_{k=1}^{K} e^{z_{ik}}},$$

where $z$ denotes the output of the CNN, $i$ denotes the sample index, and $K$ denotes the total number of classes. All convolutional layers have a kernel size of 3, a stride of 1, and padding of 1. A cross-entropy loss function and the Adam [23] optimization algorithm were used in the model. The initial learning rate was set to 0.001.
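A PyTorch sketch of this architecture. The feature-map counts (512/256/128), kernel size, stride, padding, loss, optimizer, and learning rate follow the text; the single input channel, the absence of pooling, and the flatten-then-linear head are assumptions where Figure 2(a) is not reproduced here.

```python
import torch
import torch.nn as nn

class CNN1D(nn.Module):
    """Sketch of the 1D-CNN: three Conv Blocks (Conv1d + ReLU) with
    512/256/128 feature maps, kernel 3, stride 1, padding 1, then a
    fully connected classification layer. CrossEntropyLoss applies
    the softmax internally."""
    def __init__(self, n_features=78, n_classes=15):
        super().__init__()
        self.blocks = nn.Sequential(
            nn.Conv1d(1, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(512, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(128 * n_features, n_classes)

    def forward(self, x):                  # x: (batch, 1, 78)
        h = self.blocks(x)
        return self.fc(h.flatten(1))       # raw logits

model = CNN1D()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()
logits = model(torch.randn(4, 1, 78))      # forward pass on a toy batch
```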

The 2D-CNN architecture is shown in Figure 2(b), which was designed for two-dimensional image inputs. Compared with the structure in Figure 2(a), the Conv Block consists of a convolutional layer, a BatchNormalization layer, a Leaky ReLU layer, and a MaxPooling layer; the Leaky ReLU is defined as

$$f(x) = \begin{cases} x, & x \ge 0, \\ \alpha x, & x < 0, \end{cases}$$

where $\alpha$ is a small positive slope.

The Leaky ReLU layer effectively reduces information loss by retaining negative inputs. BatchNormalization is used to alleviate the vanishing-gradient problem and to speed up model training. The MaxPooling layer is used to (1) improve the translation and rotation invariance of features and (2) reduce the number of model parameters, alleviating overfitting.
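A PyTorch sketch of one such 2D Conv Block. The channel counts, Leaky ReLU slope, and pooling window are illustrative assumptions; the layer ordering follows the text.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """2D Conv Block as described: convolution + BatchNormalization +
    Leaky ReLU (keeps a small slope for negative inputs) + MaxPooling."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1),       # negative slope value is an assumption
        nn.MaxPool2d(2),         # halves each spatial dimension
    )

block = conv_block(19, 32)       # e.g. 19 spectral bands in, 32 maps out
block.eval()                     # use running stats for BatchNorm
out = block(torch.randn(2, 19, 64, 64))
```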

3. Results

3.1. Spectral Profiles

Figure 3 shows the average spectral reflectance of the fifteen eggplant varieties. Only slight differences existed among the average spectra of the different varieties. The spectral curves of the fifteen varieties followed similar trends, showing a decline in the range of 515–540 nm. The spectral reflectance of 17-38 was the highest among the fifteen varieties; the other varieties fell within the same range, and the spectral curves of most varieties crossed or overlapped with each other.

3.2. Classification Model Based on Multiple Features

Discriminant models based on the extracted features were developed with the SVM and the 1D-CNN. Table 2 shows the test accuracy of the SVM with the RBF, polynomial, and linear kernel functions. The SVM with the linear kernel had the best accuracy, 91.82%. The best performing model overall was the 1D-CNN, with a classification accuracy of 94.80%. Figures 4(a)–4(d) show the training loss, training accuracy, testing loss, and testing accuracy. The loss dropped rapidly over the iterations while the classification accuracy increased quickly, indicating rapid convergence. To evaluate the performance of the model, a confusion matrix was calculated for the validation set (Figure 5). The number of misclassifications between 17-14 and 17-12 was 2; all other misclassifications were 1. The accuracy on the validation set was 93.2%, slightly lower than on the testing set.


Table 2: Test accuracy of the SVM with different kernel functions.

Algorithm   Kernel   Accuracy (%)
SVM         RBF      87.13
            Linear   91.82
            Poly     68.52
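Per-class evaluation via a confusion matrix, as used for Figures 5 and 7, can be sketched with scikit-learn; the toy labels below are illustrative, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# toy ground-truth and predicted labels for three varieties (illustrative)
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 1, 2, 2, 2, 0])

cm = confusion_matrix(y_true, y_pred)   # rows: truth, columns: prediction
accuracy = np.trace(cm) / cm.sum()      # diagonal = correct classifications
```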

3.3. Classification Model Based on Images

We also used the 2D-CNN to develop discriminant models, and the classification accuracy was 90.67%. Figures 6(a)–6(d) show the training loss, training accuracy, testing loss, and testing accuracy; the trends were consistent with the 1D-CNN. Figure 7 shows the confusion matrix for the validation set. Early stopping was adopted to prevent overfitting. The variety pairs that were difficult for the model to distinguish are listed in Table 3: there were 8 misclassifications between 17-53 and 17-49, 5 between 17-55 and 17-41, 3 between 17-52 and 17-54, and 3 between 17-55 and 17-52. Notably, 17-14 and 17-12 were difficult to distinguish with both classification approaches (features and images). The accuracy on the validation set was 87.6%, slightly lower than on the testing set.


Table 3: Variety pairs most often confused on the validation set.

Ground truth   Predicted   Quantity
17-53          17-49       8
17-55          17-41       5
17-52          17-54       3
17-55          17-52       3
17-12          17-14       2
17-14          17-15       2
17-25          17-24       2
17-52          17-53       2
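The early stopping mentioned above can be sketched framework-independently: training halts once the validation loss has failed to improve for a fixed number of epochs. The patience value and loss trace below are illustrative.

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for
    `patience` consecutive epochs (patience value is illustrative)."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0

    def step(self, val_loss):
        if val_loss < self.best:         # improvement: reset the counter
            self.best, self.wait = val_loss, 0
            return False                 # keep training
        self.wait += 1
        return self.wait >= self.patience

stopper = EarlyStopping(patience=3)
losses = [1.0, 0.8, 0.9, 0.95, 0.97]    # validation loss plateaus after epoch 2
stopped = [stopper.step(l) for l in losses]
```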

4. Discussion

A CNN is an end-to-end architecture, which makes its models convenient to train and deploy. CNNs showed good performance for processing both feature data and images in our experiments. The model built from the quantified features was more accurate than the model built from images, while the advantage of the 2D-CNN is that no complex feature extraction algorithms need to be designed.

For the 1D-CNN, the accuracy on the training set was close to 100% while that on the testing set was 93.58%, indicating that the small number of training samples led to some overfitting during training.

The reflectance of variety 17-38 in Figure 3 was the highest among the varieties, and the classification accuracy for 17-38 was 100% in Figures 5 and 7, which shows that differences between varieties are reflected in the spectral data. However, the spectral data of 19 bands alone was not sufficient to distinguish the 15 varieties, so spectrum-only classification was not used in this study.

The phenomenon revealed by this study is noteworthy. It is widely accepted that the seed shell develops from the female parent (the integument), so phenotypes of seeds from the same female parent are correlated [24]. We analyzed the variety pairs with high misclassification counts in Table 3: 17-52, 17-54, and 17-55 are long eggplants, and 17-15 and 17-14 are black round eggplants, and these varieties share the same female parents (Table 4). We also found that some varieties from the same female parent can nevertheless be distinguished by morphological and spectral information. The side profiles of seeds from the same female parent shared the same contour (kidney shaped), but the seed umbilicus was round, elliptical, or triangular, with significant differences: for example, the umbilicus of 17-12 is round while that of 17-14 is triangular, and the umbilicus of 17-24 is triangular while those of 17-25 and 17-26 are round. These results suggest that genetic and environmental factors have a combined influence on seed phenotypes.


Table 4: Parental combinations of the fifteen varieties.

Varieties   Female parent × male parent
17-5        TM × CJY
17-12       CJY × JYHG
17-14       CJY × HQY
17-15       CJY × TZSQ
17-24       GCBY × HLR
17-25       GCBY × HQY
17-26       GCBY × TZSQ
17-38       N5 × TQ1
17-39       N5 × HQF
17-41       TQ1 × HQF
17-49       7#M × 14#F
17-52       Dr3 × 3#F
17-53       Dr3 × 7#M
17-54       Dr3 × 7#F
17-55       Dr3 × 11#M

5. Conclusions

In this study, we adopted multispectral imaging with different machine learning methods to discriminate varieties of eggplant seeds. The SVM and the 1D-CNN were used to classify seeds based on the extracted features and were compared with the 2D-CNN, which requires no feature extraction. The experiments demonstrated the feasibility of CNNs for the classification of seed varieties; the CNNs significantly outperformed the traditional machine learning algorithm in this study. Theoretically, the seed shell derives from the female parent, but our study revealed that genetic and environmental factors can lead to significant differences even among seeds from the same female parent. This phenomenon should be investigated further with an experimental design incorporating more varieties and a larger sample size.

Data Availability

The data used to support the findings of this study are available from the corresponding authors upon request.

Conflicts of Interest

The authors declare that they have no competing financial or nonfinancial interests.

Authors’ Contributions

Sheng Huang, Xiaofei Fan, and Lei Sun contributed equally to this work.

Acknowledgments

This work is supported by the Key R&D Program of Hebei Province (20327403D), Hebei Talent Support Foundation (E2019100006), Talent Recruiting Program of Hebei Agricultural University (YJ201847), University Science and Technology Research Project of Hebei (QN2020444), and State Key Laboratory of North China Crop Improvement and Regulation, Hebei Agricultural University.

References

  1. S. Shrestha, L. Deleuran, M. Olesen, and R. Gislum, “Use of multispectral imaging in varietal identification of tomato,” Sensors, vol. 15, no. 2, pp. 4496–4512, 2015.
  2. J. Janick, Ed., Plant Breeding Reviews, vol. 39, Wiley, New Jersey, 2015.
  3. H. Wang, J. Liu, X. Xu et al., “Fully-automated high-throughput NMR system for screening of haploid kernels of maize (corn) by measurement of oil content,” PLoS One, vol. 11, no. 7, article e0159444, 2016.
  4. H. P. Haliński, J. Samuels, and P. Stepnowski, “Multivariate analysis as a key tool in chemotaxonomy of brinjal eggplant, African eggplants and wild related species,” Phytochemistry, vol. 144, pp. 87–97, 2017.
  5. B. Boelt, S. Shrestha, Z. Salimi, J. R. Jørgensen, M. Nicolaisen, and J. M. Carstensen, “Multispectral imaging – a new tool in seed quality assessment?,” Seed Science Research, vol. 28, no. 3, pp. 222–228, 2018.
  6. L. M. Kandpal, S. Lohumi, M. S. Kim, J. S. Kang, and B. K. Cho, “Near-infrared hyperspectral imaging system coupled with multivariate methods to predict viability and vigor in muskmelon seeds,” Sensors and Actuators B: Chemical, vol. 229, pp. 534–544, 2016.
  7. K. Sendin, M. Manley, and P. J. Williams, “Classification of white maize defects with multispectral imaging,” Food Chemistry, vol. 243, pp. 311–318, 2018.
  8. S. Shrestha, L. C. Deleuran, and R. Gislum, “Classification of different tomato seed cultivars by multispectral visible-near infrared spectroscopy and chemometrics,” Journal of Spectral Imaging, vol. 5, article a1, 2016.
  9. M. Huang, C. He, Q. Zhu, and J. Qin, “Maize seed variety classification using the integration of spectral and image features combined with feature transformation based on hyperspectral imaging,” Applied Sciences, vol. 6, no. 6, p. 183, 2016.
  10. I. Orrillo, J. P. Cruz-Tirado, A. Cardenas et al., “Hyperspectral imaging as a powerful tool for identification of papaya seeds in black pepper,” Food Control, vol. 101, pp. 45–52, 2019.
  11. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  12. K. Park, Y. Hong, G. Kim, and J. Lee, “Classification of apple leaf conditions in hyper-spectral images for diagnosis of Marssonina blotch using mRMR and deep neural network,” Computers and Electronics in Agriculture, vol. 148, pp. 179–187, 2018.
  13. W. Zhao, L. Jiao, W. Ma et al., “Superpixel-based multiple local CNN for panchromatic and multispectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 7, pp. 4141–4156, 2017.
  14. H. Wei, H. Yangyu, W. Li, Z. Fan, and L. Hengchao, “Deep convolutional neural networks for hyperspectral image classification,” Journal of Sensors, 12 pages, 2015.
  15. Z. Qiu, J. Chen, Y. Zhao, S. Zhu, Y. He, and C. Zhang, “Variety identification of single rice seed using hyperspectral imaging combined with convolutional neural network,” Applied Sciences, vol. 8, no. 2, p. 212, 2018.
  16. E. Levent, “Bearing fault detection by one-dimensional convolutional neural networks,” Mathematical Problems in Engineering, vol. 2017, Article ID 8617315, 2017.
  17. J. M. Carstensen, M. A. E. Hansen, N. C. K. Lassen, P. W. Hansen, and T. M. Jørgensen, “Creating surface chemistry maps using multispectral vision technology,” in 9th MICCAI Workshop on Biophotonics Imaging for Diagnostics and Treatment, Lyngby, 2016.
  18. P. E. H. Petersen and G. W. Krutz, “Automatic identification of weed seeds by color machine vision,” American Society of Agricultural Engineers Meeting, 2016.
  19. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
  20. K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, “Return of the devil in the details: delving deep into convolutional nets,” in Proceedings of the British Machine Vision Conference 2014, Nottingham, 2014.
  21. F. Pedregosa, G. Varoquaux, A. Gramfort et al., “Scikit-learn: machine learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.
  22. V. Nair and G. E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” in Proceedings of the 27th International Conference on Machine Learning, Omnipress, 2010.
  23. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in 3rd International Conference on Learning Representations, San Diego, 2015.
  24. M. Huidong and J. Chang, “Genetic appraisal for seed traits,” Jiangsu Journal of Agricultural Sciences, vol. 7, no. 3, pp. 7–12, 1991.

Copyright © 2021 Lei Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

