Journal of Sensors
Volume 2019, Article ID 3175848, 12 pages
https://doi.org/10.1155/2019/3175848
Research Article

Fuzzy Classification of the Maturity of the Tomato Using a Vision System

1Instituto Tecnológico de Celaya, Celaya 38010, Mexico
2Departamento de Mecatrónica del ITESI, Irapuato 36698, Mexico
3Departamento de Alimentos, Universidad de Guanajuato, Mexico
4Cátedras Conacyt, Mexico

Correspondence should be addressed to Marcos J. Villaseñor-Aguilar; mavillasenor@itess.edu.mx

Received 29 December 2018; Revised 5 March 2019; Accepted 14 March 2019; Published 4 July 2019

Guest Editor: Jesus R. Millan-Almaraz

Copyright © 2019 Marcos J. Villaseñor-Aguilar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Artificial vision systems (AVS) have become very important in precision agriculture, which aims to produce high-quality, low-cost foods with high functional characteristics through environmentally responsible practices. This article reports the design and implementation of a new fuzzy classification architecture based on the RGB color model with descriptors. Three inputs were used, associated with the average values of the color components over four views of the tomato; the number of triangular membership functions associated with the R and B components was three, and four for the G component. Forty tomato samples were used for training and twenty for testing; the training was done using the Matlab© ANFISEDIT tool. The tomato samples were divided into six categories according to the US Department of Agriculture (USDA) criteria. This study focused on optimizing the color-space descriptors to achieve high precision in the final classification task, with an error of the order of 10^-6. The computer vision system (CVS) is integrated by an image isolation enclosure with controlled lighting; the image capture subsystem uses a Raspberry Pi 3 and a Raspberry Pi Camera Module 2 at a fixed distance against a black background. In the implementation of the CVS, three different color description methods for tomato classification were analyzed and their respective fuzzy systems designed, two of them using descriptors reported in the literature.

1. Introduction

Tomato is one of the main vegetables consumed by humans for its antioxidant content, vitamins (A, B1, B2, B6, C, and E), and minerals such as potassium, magnesium, manganese, zinc, copper, sodium, iron, and calcium [1]. This fruit provides health benefits in the prevention of chronic diseases such as cancer, osteoporosis, and cataracts. One of the main indicators that reveals the internal composition of the tomato is its degree of maturity. This characteristic is very important for determining the logistic processes of harvest, transport, commercialization, and consumption. In this respect, the Department of Agriculture of the United States (USDA) establishes six maturity states: Green, Breaker, Turning, Pink, Light Red, and Red [2]; these are shown in Figure 1.

Figure 1: Maturity classification according to the US Department of Agriculture (USDA 2018).

In the literature, there is research on artificial vision that reports methodologies to estimate the maturity stages of the tomato using color as the main characteristic. Tomato maturity estimation models have been proposed based on different color space models. For example, the CIELab model allows identifying the six maturity stages of the tomato using the Minolta a*/b* ratio [3–5]. Reference [6] conducted a study of the firmness and color of the tomato; they reported that the recommended firmness for commercialization was 1.46 N mm-1, and they also determined that in the pink maturity stage of the tomato, the Minolta a* values change from negative to positive. When the Minolta a*/b* ratio of the tomatoes reached 0.6–0.95, they could be easily marketed. On the other hand, [4] estimated the lycopene content in the different maturity stages of the tomato by means of the leaf area and the color parameters (L*, a*, b*, and hue). This model was built using an artificial neural network (ANN).

On the other hand, the use of the RGB color model has allowed the identification of tomato maturity. Reference [7] proposed a methodology to identify red tomatoes for automatic cutting by a robot; it used RGB images analyzed through the relationships between the red and blue (R-B) and red and green (R-G) components, which allowed formulating inequalities on these differences; when the conditions are met, the fruit can be harvested. A similar investigation was carried out by [8], who compared RGB images with hyperspectral images (in the range of 396–736 nm, with a spectral resolution of 1.3 nm, using a spectrograph). A linear discriminant analysis was applied to both groups to classify the tomatoes into five maturity stages, weighted by a majority-vote strategy over the analysis of the individual pixels. The authors document that hyperspectral images were more discriminant than RGB images in tomato maturity analysis.

In 2018, [9] developed a maturity classification system for tomatoes; the system used two types of tomatoes: with defects and without defects. For the fruit classification, an artificial backpropagation neural network (BPNN) was used, implemented in Matlab©. This system identified the maturity degrees red, orange, turning, and green. The architecture of the neural network had thirteen inputs, associated with six color features and seven shape features, twenty neurons in the hidden layer, and one in the output. Reference [10] proposed a method using a BPNN to detect maturity levels (green, orange, and red) of tomatoes of the Roma and Pera varieties.

The color characteristics were extracted from five concentric circles of the fruit, and the average color values of each subregion were used to predict the maturity level of the samples; these values were the inputs of the BPNN. The average precision in detecting the three maturity levels of the tomato samples with this method was 99.31%, with a standard deviation of 1.2%. Reference [11] implemented a classification system based on convolutional neural networks (CNN). The proposed classification architecture was composed of three stages: the first stage handled three-channel color images of 200 pixels in height and width; the second used five CNN layers that extracted the main characteristics, with convolution kernels of several sizes chosen to preserve characteristics, reduce unnecessary parameters, and improve calculation speed, together with two max-pooling layers interleaved among the CNN layers; the last part performs the classification through a fully connected layer. The experimental results showed an average accuracy of 91.9% with a prediction time of less than 0.01 s. Another work was that of [3], who proposed an algorithm in Matlab© Simulink that employed a 4-megapixel camera at a frame rate of 30 fps for image capture; the images underwent processing consisting of erosion and dilation. The classification and identification of tomato maturity were performed by obtaining the red chroma (Cr) of the YCbCr color model, which was between 135 and 180. Reference [6] developed a cherry tomato maturity classification system based on artificial vision; in this proposal, they used color, texture, and shape features with k-nearest-neighbor and support vector machine classifiers to classify the ripened tomatoes.

Currently, with Computer Vision Systems (CVS) and Fuzzy Logic (FL), maturity classification applications for tomatoes, guavas, apples, mangoes, and watermelons have been developed [12]. FL is an artificial intelligence technique that models human reasoning from the linguistics of an expert to solve a problem; therefore, the logical processing of the variables is qualitative, based on quantitative membership functions [13]. References [14, 4] argue that the classification of the maturity of the elements of study is composed of two systems: the identification of color and its labeling. For color representation, they used image histograms based on the RGB, HSI, and CIELab color space models; for the automatic labeling of the fruits, they designed a fuzzy system that handled the knowledge base transferred by an expert. On the other hand, the proposal made by [15] estimated the level of maturity in apples using the RGB color space; their methodology used four images of different views of the fruit. They proposed four maturity classes, based on a fuzzy system, defined as mature, low mature, near to mature, or too mature. The inputs of the fuzzy system were the average values of each color map of the segmented images. Reference [13] developed an image classification system for apple, sweet lime, banana, guava, and orange; the system was implemented in Matlab©. The characteristics extracted from each fruit's image were the area and the major and minor axes of each sample; these were used as inputs of the fuzzy system for classification. Another similar study was reported by [16], which implemented a fuzzy system to classify guavas in the maturity stages raw, ripe, and overripe. The proposed classification was based on the apparent color, and they considered three inputs: hue, saturation, and luminosity.

Following this trend, this paper reports the behavior of tomato maturity based on color in the RGB model, which is the model with which commercial digital cameras commonly work, since they are mostly built with a Bayer optical filter over the photosensors. A fuzzy system was used in the classification stage. The main contribution of this work is the comparison of color models for the description of tomato maturity stages. In addition, a Raspberry Pi was used for the capture and estimation of the output variables.

2. Materials and Methods

2.1. Sample Preparation

In the proposed method, sixty tomato samples were used (acquired in a local market) and classified into six maturity stages (Green, Breaker, Turning, Pink, Light Red, and Red). The classification was based on the criteria of the United States Department of Agriculture, USDA (1997). The samples were divided into two groups, the training and validation sets, as shown in Table 1.

Table 1: Tomato sample division into training and detection sets.
2.2. Artificial Vision System

Artificial vision systems (AVS) are intended to emulate the functionality of human vision to describe elements in captured images. Some AVS advantages compared with other proposals are cost reduction, improved accuracy, increased precision, and reliable estimation [14]. Figure 2 shows the AVS, which is integrated by three sections: (a) the image capture subsystem, (b) the lighting subsystem, and (c) the processing subsystem. The first one obtains spatial information and fruit characteristics, the second one maintains the experimental conditions, and the third one performs operations such as histogram equalization, edge highlighting, segmentation, component labeling, and tomato maturity estimation [17–19].

Figure 2: Elements of the artificial vision system (AVS): (a) subsystem of capture of images, (b) subsystem of illumination, and (c) image processing subsystem.

The images were acquired with the AVS, which was installed in a black enclosure to prevent the influence of external lighting, as shown in Figure 2. The AVS was integrated by the Raspberry Pi camera (8 megapixels) placed vertically at 30 cm from the sample at an angle of 28.8°. The lighting had a ring geometry [20] with a power of 5.4 W and a diameter of 23 cm, and it was placed 30 cm above the samples, where the intensity was 200 lux. The processing subsystem was implemented on a Raspberry Pi 3 card, which features a quad-core 1.2 GHz Broadcom BCM2837 64-bit processor with 1 GB of RAM. This device has the flexibility to be used in the solution of versatile problems [21].

The proposed system is shown in Figure 3; in the first stage, the RGB images of the samples were acquired. After that, images were segmented to create a vector with averages of the red, green, and blue components, which worked as an input to the fuzzy system.

Figure 3: Workflow of the proposed system.
2.3. Image Acquisition

Four images of each fruit were acquired, one per view of the tomato, obtaining a total of 240 images corresponding to 60 fruits. Figure 4 shows the four views of a sample in the green maturity state. The captured images were scaled down to a smaller working size; the illumination intensity was 200 lux.

Figure 4: Sample views in four different directions.
2.4. Image Segmentation

Figures 5(a)–5(c) show the process performed on the samples using Python 3.7 and OpenCV. The first step captured the images and assigned a maturity level. The second step binarized them in HSV space using 100 ≤ H ≤ 156, 90 ≤ S ≤ 255, and 0 ≤ V ≤ 255, with each channel ranging between 0 and 255. The fourth step segmented each tomato image and labeled it using a connected-components algorithm. The fifth step separated out the image segments under 500 pixels; finally, the respective masks were used to obtain the areas of interest for each sample.

Figure 5: Segmentation of one sample: (a) capture and scaling of the sample image, (b) binarization of the image and noise, and (c) image segmentation by means of the minor area discrimination.
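The segmentation steps above can be sketched with NumPy alone (OpenCV's cv2.inRange and cv2.connectedComponents perform the same operations); the HSV thresholds are those reported in the text, while function names and the synthetic image are illustrative, not taken from the paper's code:

```python
import numpy as np
from collections import deque

def binarize_hsv(hsv, h_rng=(100, 156), s_rng=(90, 255), v_rng=(0, 255)):
    """Keep pixels whose H, S, V fall inside the paper's ranges."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return ((h >= h_rng[0]) & (h <= h_rng[1]) &
            (s >= s_rng[0]) & (s <= s_rng[1]) &
            (v >= v_rng[0]) & (v <= v_rng[1]))

def label_components(mask):
    """4-connected component labeling via BFS flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

def discard_small(labels, n, min_area=500):
    """Drop segments under min_area pixels, as in the fifth step."""
    keep = np.zeros_like(labels, dtype=bool)
    for k in range(1, n + 1):
        region = labels == k
        if region.sum() >= min_area:
            keep |= region
    return keep
```

The resulting boolean mask plays the role of the per-sample mask used to extract the areas of interest.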
2.5. Attribute Selection

The attributes were selected based on the methodology proposed by [15]. The mean channel values of the segmented images were used, and it was also considered that in the initial maturity stages the studied tomatoes had a high green content and a very low red content; as the fruit reached full maturity, the behavior was the inverse [14]. The segment mean behavior was mapped using the image channels of the 40 training samples, for the RGB color model, CIELab 1976, and the Minolta a*/b* ratio, as shown in Figures 6, 7, and 8. Through this process, the identification of the six tomato maturity stages was possible, basically due to the direct relationship between the orthogonality of the axes and the data classes.

Figure 6: Mapping of the means of the segments of the RGB channels of the training set.
Figure 7: Mapping of the means of the segments of the channel CIELab 1976 of the training set.
Figure 8: Mapping of the channel segment mean CIELab 1976 of the training set using the Minolta (a*/b*) relation.
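As a sketch of this attribute extraction (the function names are illustrative, not from the paper's code), the per-view channel means can be computed over the segmented pixels and averaged over the four views of each fruit:

```python
import numpy as np

def mean_rgb(rgb, mask):
    """Mean R, G, B over the segmented tomato pixels of one view."""
    return rgb[mask].mean(axis=0)      # -> [mean_R, mean_G, mean_B]

def sample_descriptor(views):
    """Average the per-view means over the four views of one fruit;
    views is a list of (rgb_image, boolean_mask) pairs."""
    return np.mean([mean_rgb(img, m) for img, m in views], axis=0)
```

From this descriptor, the R-G input of the second architecture is simply the difference of the first two components.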
2.6. Fuzzification

In this stage, fuzzification has the main purpose of translating the input values into linguistic variables [22]. In the proposed system, a vector formed by the average values of the RGB components is used as the input variable. The input fuzzification was done using triangular membership functions, as shown in Figure 9. These functions were selected for their easy hardware implementation.

Figure 9: Membership functions of the fuzzy system for maturity tomato classification.

It is well known that in the first three maturity stages, greater sensitivity is required to identify the changes compared with the rest. Therefore, in this paper, the membership functions related to the green input consisted of four sections, while three membership functions were proposed for the blue and red inputs, which resulted in six maturity states. Finally, the value range for the most significant input and output stage was determined by selecting the linguistic states for each variable, i.e., high, medium, and low.
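A minimal sketch of such a fuzzifier follows; the breakpoints are evenly spaced here purely for illustration, whereas the paper's tuned positions come from the ANFIS training:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def partition(n, lo=0.0, hi=255.0):
    """n overlapping triangles with peaks evenly spaced over [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return [(lo + (i - 1) * step, lo + i * step, lo + (i + 1) * step)
            for i in range(n)]

def fuzzify(x, mfs):
    """Degree of membership of crisp value x in each function."""
    return [tri(x, *p) for p in mfs]

# three functions for R and B, four for G, as in the proposed system
MF_R, MF_G, MF_B = partition(3), partition(4), partition(3)
```

With adjacent triangles overlapping at half height, the membership degrees of any crisp input sum to one, which keeps the rule firing strengths well scaled.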

2.7. Fuzzy System Implementation

The fuzzy system was implemented with the Matlab ANFISEDIT tool, with image capture by the Raspberry Pi camera; the training data set was integrated by the means of the RGB channels of each image, and the output was the label assigned to each sample.

Four variants of the fuzzy system were designed to classify the maturity state of the tomato. In these, several parameters were kept fixed: the inputs of the system, the number of training epochs, and the type of membership functions. Table 2 shows the architecture used for each fuzzy system and the error obtained after training, where it can be seen that the designs that presented the smallest errors were Models 3 and 4. The selected membership function is triangular because of its easy implementation.

Table 2: Fuzzy system training results.

The programming was carried out using the methodology proposed by [23]. The description of each function follows, where the variables are LR (Low Red), MR (Middle Red), HR (High Red), LG (Low Green), MLG (Medium Low Green), MHG (Medium High Green), HG (High Green), LB (Low Blue), MB (Middle Blue), and HB (High Blue).

2.8. Inferential Logic

The inferential logic was determined by identifying the maximum and minimum ranges of the averages of the RGB components of the training set images. Table 3 shows the maximum and minimum averages of each maturity state according to the USDA. Using this procedure, it was possible to determine the set of 36 rules used in the fuzzy system; the linguistic variables used were low, medium, low average, high, and high average, as shown in Table 4.

Table 3: Maximum and minimum range of the averages of the RGB channels for each state of maturity.
Table 4: Inference rules.
2.9. Defuzzification

Defuzzification was done by equation (11), with the 36 inference rules obtained for the maturity model. The Takagi-Sugeno fuzzy model is illustrated in Figure 10: the crisp output is the weighted average of the rule consequents, where each rule's weight is the firing strength given by its membership functions and the sum runs over the number of inference rules.

Figure 10: Operation of Takagi-Sugeno rules to classify the maturity of the tomato.
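The weighted average of equation (11) can be sketched as a zero-order Takagi-Sugeno evaluation; here the consequent table z is a hypothetical placeholder (the paper's 36 consequents come from Tables 3 and 4), and the product is used as the AND operator:

```python
import itertools

def sugeno_output(mu_r, mu_g, mu_b, z):
    """Crisp output = sum(w * z) / sum(w), where the firing strength w
    of rule (i, j, k) is the product of the membership degrees of its
    R, G, and B antecedents, and z[(i, j, k)] is the rule consequent."""
    num = den = 0.0
    for i, j, k in itertools.product(range(len(mu_r)),
                                     range(len(mu_g)),
                                     range(len(mu_b))):
        w = mu_r[i] * mu_g[j] * mu_b[k]
        num += w * z[(i, j, k)]
        den += w
    return num / den if den else 0.0
```

With 3, 4, and 3 membership functions on the inputs, this enumerates 3 × 4 × 3 = 36 rules, matching the rule base size stated in the text.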
2.10. Fuzzy System Proposal

Three proposed fuzzy system architectures were evaluated for fruit maturity identification, as shown in Figure 11. These used the means of the channels of the segments associated with the image. The first architecture uses the R, G, and B channels as inputs; the second uses the difference of the R and G channels, which allows identifying the maturity according to the methodology proposed by [7]; and the last one changes the color model from RGB to CIELab 1976, using as inputs L*, a*, b*, and the Minolta a*/b* relation proposed by [4].

Figure 11: Architecture of the fuzzy models: (a) model that uses the mean RGB channels of the tomato image segment, (b) model that uses the R-G mean of the tomato image segment, and (c) model that uses the L*, a*, b*, and a*/b* means of the tomato image segment.

To perform the ANFIS training, forty samples across the six maturity stages were used. Table 5 shows the results of the training using 100 epochs for the three proposed models. It can be observed that Model 1 has the lowest training error, 0.046; this model uses the inputs R, G, and B with 3, 4, and 3 membership functions, respectively.

Table 5: Proposed fuzzy systems.

3. Results

The results were obtained from the models using a set of 20 samples that were not part of the training set; they are shown in Table 6. Looking at Model 1, it can be noticed that it presented an error of the order of 10^-6, the smallest compared with the other two. Models 1 and 3 managed to correctly classify the entire test set, whereas Model 2 failed to classify twelve samples of the test set; these are marked in italics. The classification error is lower in Model 1 because the mean descriptors of the R, G, and B channels capture the increase in the mean of the red channel, the decrease in the mean of the green channel, and the nonlinear behavior of the mean of the blue channel [14].

Table 6: Output and error of different classification systems.

4. Discussion

According to the results, Models 1 and 3 correctly classified the set of test samples. Additionally, these presented the lowest sums of squared errors; the fuzzy system designed for the RGB components averaged an error of the order of 10^-6. Its architecture used ten membership functions in total: three for red, four for green, and three for blue, which gave a reliable performance.

Additionally, Model 3 was a fuzzy system that used the averages (L*, a*, b*, and a*/b*) of the tomato as inputs; its architecture integrated twelve membership functions, i.e., three per input. The sum of errors of this system was of the order of 10^-6. On the other hand, the fuzzy system with the R-G data entry had 10 membership functions, and the sum of its squared classification errors was 32.8434.

It can be inferred that, by using the subtraction (R-G) as a descriptor, the fuzzy classifier hid the information of the R and G components while discarding the blue component. This system presented difficulties in classifying classes 3, 4, and 5; consequently, its efficiency was very low compared with the others. The color representation with the components (L*, a*, b*, and a*/b*) seems to help, because few membership functions were used per input, but with four inputs the working area was divided into 81 sectors. The color representation with the RGB model takes the direct values from the image sensor, generally of the Bayer type, so it has the complete information for classification without noise due to data transformation; this is a clear theoretical advantage of this work. Moreover, the fuzzy system designed with RGB averages used only 36 sectors, generated by three membership functions in R, four in G, and three in B; this is a practical advantage.

In other words, the classification of the six tomato maturity stages can be done reliably in the RGB color space, mainly due to the nonlinear surfaces created by the fuzzy system, which separate each stage. However, the main limitation of the proposed system is that the overall experimentation was carried out in a controlled environment (fixed lighting, fixed distance from the camera to the sample, and a matte black background). This weakness is already being addressed by the research team, and a proposal will be reported in an upcoming paper.

5. Conclusion

In this work, a CVS was designed using a Raspberry Pi 3, which classified tomato maturity degrees according to the USDA criteria with an average error of the order of 10^-6. The acquisition of CVS images was done with the Raspberry Pi Camera Module 2 in a controlled environment with an illumination intensity of 200 lux, with the aim of reducing noise in the fruit segmentation. Subsequently, several fuzzy systems were evaluated while maintaining the use of the four views, in order to optimize the number of triangular membership functions and reduce the classification error. Based on the results, the system obtained a good classification, surpassing the systems that use the CIELab color space model and the R-G descriptor. In addition, this study confirms the reported relationship between the Minolta a*/b* ratio and tomato maturity [3–6].

One aspect that can be highlighted is the use of the Raspberry Pi 3 and the Raspberry Pi Camera Module 2, which allowed creating applications of easy technology transfer and rapid implementation focused on the classification of fruit and vegetable maturity. This system can be extended to CVS estimation of soluble solids, vitamins, and antioxidants in tomato.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Authors’ Contributions

Marcos Jesús Villaseñor Aguilar contributed to the implementation of the image acquisition system of the tomato samples. Also, he developed the capture and processing system software for the determination of tomato maturity levels. J. Enrique Botello Alvarez contributed to the conceptualization, the design of the vision system experiment, the tutoring, and the supply of study materials, laboratory samples, and equipment. F. Javier Pérez-Pinal contributed to the preparation, creation of the published work, writing of the initial draft, and validation of the results of the vision system. Miroslava Cano-Lara focused on the validation of the vision system of acquisition and of the algorithms. M. Fabiola León Galván focused on the revision of the results in the classification system and in the conceptualization. Micael-Gerardo Bravo-Sánchez contributed to the methodology design, the tutoring, and the establishment of the design of the vision system experiment. Alejandro Israel Barranco Gutierrez led the supervision and responsibility of the leadership for the planning, the execution of the research activity, the technical validation, and the follow-up of the publication of the manuscript.

Acknowledgments

The authors greatly appreciate the support of TecNM, CONACyT, PRODEP, UG, ITESI, and ITESS.

References

  1. A. Gastélum-Barrios, R. A. Bórquez-López, E. Rico-García, M. Toledano-Ayala, and G. M. Soto-Zarazúa, "Tomato quality evaluation with image processing: a review," African Journal of Agricultural Research, vol. 6, no. 14, pp. 3333–3339, 2011.
  2. K. Choi, G. Lee, Y. J. Han, and J. M. Bunn, "Tomato maturity evaluation using color image analysis," Transactions of the ASAE, vol. 38, no. 1, pp. 171–176, 1995.
  3. S. R. Rupanagudi, B. S. Ranjani, P. Nagaraj, and V. G. Bhat, "A cost effective tomato maturity grading system using image processing for farmers," in Proceedings of 2014 International Conference on Contemporary Computing and Informatics (IC3I), pp. 7–12, Mysore, India, 2014.
  4. M. A. Vazquez-Cruz, S. N. Jimenez-Garcia, R. Luna-Rubio et al., "Application of neural networks to estimate carotenoid content during ripening in tomato fruits (Solanum lycopersicum)," Scientia Horticulturae, vol. 162, pp. 165–171, 2013.
  5. R. Arias, T.-C. Lee, L. Logendra, and H. Janes, "Correlation of lycopene measured by HPLC with the L*, a*, b* color readings of a hydroponic tomato and the relationship of maturity with color and lycopene content," Journal of Agricultural and Food Chemistry, vol. 48, no. 5, pp. 1697–1702, 2000.
  6. V. Pavithra, R. Pounroja, and B. Sathya Bama, "Machine vision based automatic sorting of cherry tomatoes," in 2015 2nd International Conference on Electronics and Communication Systems (ICECS), pp. 271–275, Coimbatore, India, 2015.
  7. Y. Takahashi, J. Ogawa, and K. Saeki, "Automatic tomato picking robot system with human interface using image processing," in IECON'01. 27th Annual Conference of the IEEE Industrial Electronics Society (Cat. No.37243), pp. 433–438, Denver, CO, USA, 2001.
  8. G. Polder, G. W. A. M. van der Heijden, and I. T. Young, "Spectral image analysis for measuring ripeness of tomatoes," Transactions of the ASAE, vol. 45, no. 4, pp. 1155–1161, 2002.
  9. S. Kaur, A. Girdhar, and J. Gill, "Computer vision-based tomato grading and sorting," in Advances in Data and Information Sciences, pp. 75–84, Springer, 2018.
  10. P. Wan, A. Toudeshki, H. Tan, and R. Ehsani, "A methodology for fresh tomato maturity detection using computer vision," Computers and Electronics in Agriculture, vol. 146, pp. 43–50, 2018.
  11. L. Zhang, J. Jia, G. Gui, X. Hao, W. Gao, and M. Wang, "Deep learning based improved classification system for designing tomato harvesting robot," IEEE Access, vol. 6, pp. 67940–67950, 2018.
  12. A. R. Mansor, M. Othman, M. Nazari, and A. Bakar, "Regional conference on science, technology and social sciences (RCSTSS 2014)," in Business and Social Sciences, p. 288, Springer, Malaysia, 2016.
  13. H. G. Naganur, S. S. Sannakki, V. S. Rajpurohit, and R. Arunkumar, "Fruits sorting and grading using fuzzy logic," International Journal of Advanced Research in Computer Engineering and Technology, vol. 1, no. 6, pp. 117–122, 2012.
  14. N. Goel and P. Sehgal, "Fuzzy classification of pre-harvest tomatoes for ripeness estimation – an approach based on automatic rule learning using decision tree," Applied Soft Computing, vol. 36, pp. 45–56, 2015.
  15. M. Dadwal and V. K. Banga, "Estimate ripeness level of fruits using RGB color space and fuzzy logic technique," International Journal of Engineering and Advanced Technology (IJEAT), vol. 2, no. 1, 2012.
  16. R. Hasan, S. Muhammad, and G. Monir, "Fruit maturity estimation based on fuzzy classification," pp. 27–32, Kuching, Malaysia, 2017.
  17. M. S. Acosta-Navarrete, J. A. Padilla-Medina, J. E. Botello-Alvarez et al., "Instrumentation and control to improve the crop yield," in Biosystems Engineering: Biofactories for Food Production in the Century XXI, R. Guevara-Gonzalez and I. Torres-Pacheco, Eds., pp. 363–400, Springer, 2014.
  18. A. K. Seema and G. S. Gill, "Automatic fruit grading and classification system using computer vision: a review," in 2015 Second International Conference on Advances in Computing and Communication Engineering, pp. 598–603, Dehradun, India, 2015.
  19. B. Zhang, W. Huang, J. Li et al., "Principles, developments and applications of computer vision for external quality inspection of fruits and vegetables: a review," Food Research International, vol. 62, pp. 326–343, 2014.
  20. D. Wu and D.-W. Sun, "Colour measurements by computer vision for food quality control – a review," Trends in Food Science & Technology, vol. 29, no. 1, pp. 5–20, 2013.
  21. M. Pagnutti, R. E. Ryan, G. Cazenavette et al., "Laying the foundation to use Raspberry Pi 3 V2 camera module imagery for scientific and engineering purposes," Journal of Electronic Imaging, vol. 26, no. 1, article 013014, 2017.
  22. V. A. Marcos, Á. T. Erik, R. A. Agustín, O. M. Horacio, and P. M. José A, "Artificial intelligence techniques for the stability control of a 3RRR parallel manipulator," Revista de Ingeniería Eléctrica, Electrónica y Computación, vol. 11, no. 1, 2013.
  23. B. Gutiérrez, Á. L. Cárdenas, and F. P. Pinal, "Implementation of a fuzzy system on Arduino Uno," November 2016, https://www.researchgate.net/profile/Alejandro_Barranco_Gutierrez5/publication/309676195_Implementacion_de_sistema_difuso_en_Arduino_Uno/links/581cc82f08ae12715af20b4e/Implementacion-de-sistema-difuso-en-Arduino-Uno.pdf.