Journal of Spectroscopy
Volume 2018, Article ID 8949741, 12 pages
Research Article

Machine Learning Applied to Near-Infrared Spectra for Chicken Meat Classification

1Department of Computer Science (DC), State University of Londrina (UEL), 86057-970 Londrina, PR, Brazil
2Department of Zootechnology, State University of Londrina (UEL), 86057-970 Londrina, PR, Brazil
3Science Institute of Mathematics and Computers (ICMC), University of São Paulo (USP), 13566-590 São Carlos, SP, Brazil
4Department of Food Engineering, University of Campinas (Unicamp), 13083-862 Campinas, SP, Brazil
5Department of Food Science, Federal University of Technology (UTFPR), Londrina 86020-430, PR, Brazil

Correspondence should be addressed to Douglas Fernandes Barbin; dfbarbin@unicamp.br

Received 29 March 2018; Accepted 24 June 2018; Published 7 August 2018

Academic Editor: Maria Carmen Yebra-Biurrun

Copyright © 2018 Sylvio Barbon Jr. et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Identification of chicken quality parameters is often inconsistent, time-consuming, and laborious. Near-infrared (NIR) spectroscopy has been used as a powerful tool for food quality assessment. However, NIR spectra contain a large amount of redundant information. Determining wavelength relevance and selecting subsets for classification and prediction models are mandatory for the development of multispectral systems. A combination of attribute and wavelength selection for NIR spectral information of chicken meat samples was investigated. Decision Tree and Decision Table predictors exploit these optimal wavelengths for classification tasks according to different quality grades of poultry meat. The proposed methodology was compared with a support vector machine (SVM) algorithm to assess the precision of the proposed model. Experiments were performed on the NIR spectral information (1050 wavelengths), colour (CIE L*a*b*, chroma, and hue), water holding capacity (WHC), and pH of each sample analyzed. Results show that the best method was REPTree based on 12 wavelengths, allowing for precise classification of poultry samples according to quality grades. The selected wavelengths could lead to simple multispectral acquisition devices.

1. Introduction

Near-infrared spectroscopy (NIRS) has been used for prediction of physicochemical properties of food, being applied to objective control and monitoring of food quality [1]. It is also a sustainable alternative, as it requires no chemicals that might harm the environment or be hazardous to human beings. The near-infrared spectrum comprises a large set of overtones and combination bands. Selecting a few essential wavelengths related to the response information can significantly reduce the amount of data to be analyzed, providing information for the development of multispectral systems. In this way, multivariate statistical methods can be used to extract detailed information from the spectra [2].

Implementation of NIRS as a process analytical technology (PAT) in the food industry involves a multidisciplinary approach in which computational intelligence (CI), particularly machine learning (ML) [3–10], has been investigated. The main advantage of CI is its capacity for handling multiple parameters, facilitating fast and accurate evaluation of samples in an industrial environment [11].

Recently, the application of ML techniques has been investigated for several food processing needs, including prediction and assessment of food quality [3, 4, 12–18]. Wang et al. [13] predicted the total viable counts (TVC) in pork using support vector machines (SVM), showing the advantage of a rapid and readily performed analysis with a reported coefficient of correlation.

A multilayer perceptron (MLP) neural network was used to correlate Fourier transform infrared (FTIR) spectral data with beef spoilage, with good classifier performance: 10 neurons in the hidden layer provided an overall correct classification [12].

Argyri et al. [15] explored SVM applied to beef samples under different packaging conditions, using spectroscopy and sensory analysis to predict fresh, semifresh, and spoiled samples. It was reported that the ML techniques (including artificial neural networks (ANN)) provided more accurate prediction models for the various groups when compared to multivariate statistical methods. Qiao et al. [18] predicted beef eating quality attributes, namely, colour, ultimate pH, and slice shear force (SSF), using spectroscopy methods and SVM on three datasets.

One of the limitations of SVM is the selection of the kernel function and its hyperparameter values. SVM cannot efficiently identify data with more than two labels, being natively a binary classifier [19, 20]. Another disadvantage is complex training for a large training set, as each class must be trained against all data from all other classes [21], thus making SVM application in the meat industry more challenging [13]. The disadvantage of parametrization also occurs in ANN architecture design, which relies on empirical experimentation. In addition, the choice of architecture demands experience with ANN types, such as backpropagation neural network (BPNN), multilayer perceptron (MLP), or radial basis function (RBF). Moreover, both approaches (ANN and SVM) can only handle numerical attributes, which in practical terms demands data preprocessing and normalization.

Decision Tree (DT) induction algorithms avoid the limitations associated with a high number of classes (unlike traditional SVM kernels), empirical kernel selection, and ANN setup [20]. Most DT induction algorithms can handle different types of attributes (numerical/categorical) and even missing values. Additionally, DTs yield accurate multiclass classifiers, handling multiclass problems directly [22].

REPTree is a fast DT induction algorithm that builds a Decision Tree using information gain as the splitting criterion and prunes it by reduced-error pruning [23, 24]. REPTree is a useful Decision Tree method because of its ability to deal with different scenarios [23]. A comparison among different DT and other highly accurate algorithms (e.g., random forest) showed that REPTree obtained the best performance among the DT algorithms [25]. When compared to J48 and SimpleCart, REPTree achieved the best results owing to its reduced-error pruning [24]. A previous work compared SVM and REPTree applied to the physical and chemical attributes of chicken meat, namely, pH, WHC, and colour features; the results show that REPTree achieved better results for classification of samples according to quality grades [26].
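The two ingredients of REPTree (information-gain splits plus validation-based pruning) can be sketched in Python. This is an assumption-laden analogue, not Weka's implementation: scikit-learn offers entropy-based splitting but no reduced-error pruning, so the sketch mimics the idea by choosing a cost-complexity pruning level that maximizes held-out accuracy.

```python
# Sketch of a REPTree-like learner: grow a tree with information gain
# (entropy) splits, then prune using a held-out validation set. The data
# here are synthetic; this approximates, not reproduces, Weka's REPTree.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Grow a full tree with information gain (entropy) as the splitting criterion.
full = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_tr, y_tr)

# Prune: pick the cost-complexity alpha that maximizes held-out accuracy,
# mimicking reduced-error pruning's use of a validation set.
alphas = full.cost_complexity_pruning_path(X_tr, y_tr).ccp_alphas
best = max(
    (DecisionTreeClassifier(criterion="entropy", ccp_alpha=a, random_state=0)
     .fit(X_tr, y_tr) for a in alphas),
    key=lambda t: t.score(X_val, y_val),
)
print(best.get_n_leaves(), best.score(X_val, y_val))
```

The pruned tree is never larger than the fully grown one, and its validation accuracy is at least as good by construction.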

M5P is another algorithm to build Decision Trees, where the leaves are linear models [22]. It is useful for wavelength selection and has advantages over traditional prediction algorithms, including the ability to deal with both categorical and continuous variables and to handle variables with missing values [27]. Zhang et al. [28] presented a comparison between M5P and two linear regressors, ridge linear regression (RLR) and support vector regression (SVR), for multilabel classification focusing on dimension reduction.
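The model-tree idea behind M5P (tree splits with linear models at the leaves) can be illustrated with a deliberately tiny hypothetical sketch: a depth-1 regression tree finds one split point on synthetic piecewise-linear data, and each leaf then gets its own linear regression.

```python
# Minimal sketch of the model-tree idea used by M5P (not Weka's algorithm):
# one tree split, with a separate linear model fitted in each leaf.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
# Piecewise-linear target with a jump at x = 5 (illustrative data).
y = np.where(X[:, 0] < 5, X[:, 0], X[:, 0] + 20.0)

# A depth-1 tree locates the split point; each leaf gets a linear model.
stump = DecisionTreeRegressor(max_depth=1).fit(X, y)
threshold = stump.tree_.threshold[0]
left, right = X[:, 0] <= threshold, X[:, 0] > threshold
models = {
    "left": LinearRegression().fit(X[left], y[left]),
    "right": LinearRegression().fit(X[right], y[right]),
}

def predict(x):
    m = models["left"] if x <= threshold else models["right"]
    return float(m.predict([[x]])[0])
```

Because the leaves hold linear models rather than constants, far fewer splits are needed than in an ordinary regression tree, which matches the paper's observation that M5P reached higher correlation with fewer nodes.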

Witten et al. [22] described the Decision Table (DTable) as a classifier built by scheme-specific attribute selection. Subramanian et al. [29] reported that the DTable outperforms a DT in problem-solving tasks when the number of attributes is large and there is a risk of ambiguities and omissions. In [30], it was applied to interpret the features in a fault diagnosis scenario, facilitating the model design.
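A Decision Table can be thought of as a lookup from discretized attribute values ("rules") to a majority class. The following hypothetical sketch (bin counts and the fallback-to-majority behaviour are assumptions, not the Weka DTable specification) shows the mechanism:

```python
# Hypothetical Decision Table sketch: discretize the chosen attributes into
# quantile bins, map each observed bin combination (a "rule") to its majority
# class, and fall back to the global majority for unseen combinations.
from collections import Counter, defaultdict
import numpy as np

def fit_table(X, y, bins=4):
    # Interior quantile edges per attribute column.
    edges = [np.quantile(col, np.linspace(0, 1, bins + 1)[1:-1]) for col in X.T]
    keys = [tuple(int(np.digitize(v, e)) for v, e in zip(row, edges)) for row in X]
    cells = defaultdict(Counter)
    for k, label in zip(keys, y):
        cells[k][label] += 1
    table = {k: c.most_common(1)[0][0] for k, c in cells.items()}
    default = Counter(y).most_common(1)[0][0]  # global majority class
    return edges, table, default

def predict_table(edges, table, default, row):
    key = tuple(int(np.digitize(v, e)) for v, e in zip(row, edges))
    return table.get(key, default)
```

Each table entry corresponds to one rule, which is why the paper can report rule counts (e.g., 10 rules for one attribute, 37 for another) as a direct measure of model size.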

Consumption of chicken meat has increased worldwide, with the chicken processing industry playing an important economic role. Guaranteeing high-quality chicken meat is essential to sustain the supply chain and is a major demand from consumers.

Physical and chemical parameters such as colour, water holding capacity (WHC), and pH are often used to classify chicken meat in quality grades, such as pale, soft, and exudative (PSE); dark, firm, and dry (DFD); and normal (N) or pale (P) samples [31]. However, these techniques are laborious and time-consuming for the fast-processing lines of the meat processing industry [32]. Thus, novel methods for fast, objective, and reliable meat quality classification are needed.

Thus, the main objective of the current work is to use machine learning approaches on different wavelengths obtained from NIRS spectra, colour (CIE L*a*b*, chroma, and hue), WHC, and pH to classify chicken meat samples according to quality grades. Considering the limitations of some ML approaches, in this work we perform the quality assessment based on Decision Trees and Decision Table, namely the M5P, REPTree, and Decision Table algorithms. In addition, we further extend previous tests [26]. In order to compare the selected approaches with the current literature, we use an SVM trained by sequential minimal optimization (SMO) with a normalized PolyKernel, enhanced for multiclass scenarios based on feature subsets using best-first search. To evaluate the supervised ML approaches, the classification of selected wavelengths was based on class labels (PSE, DFD, N, and P).

2. Materials and Methods

2.1. Preparation of Poultry Samples

Slaughtered chicken breast fillets (pectoralis major muscle) were selected by an experienced analyst to comprise as large a variation in quality features as possible. All samples were supplied by a local retailer in two batches within 5 h after slaughtering and transported under refrigerated conditions to the Laboratory of Food Science at State University of Londrina, Londrina-PR, Brazil, for further analysis [33]. The central part of each sample was carefully trimmed with a surgical scalpel to fit into a sample cell (ring cup) for NIR spectral acquisition. Subsequently, samples were minced using a kitchen chopper for 10 s, and NIR spectra were then acquired for the minced meat samples. Near-infrared spectra were collected and analyzed in a near-infrared spectrometer, FOSS NIR Analyzers XDS™, in the wavelength range of 400–2500 nm.

2.2. Quality of Poultry Meat

Chicken quality attributes were measured at 48 h postmortem, since it is within this time that most of the biochemical changes in meat take place [34, 35]. After a 30 min blooming period, ultimate pH values were measured using a Testo 205 (Testo AG, Lenzkirch, Germany); the average of two measurements was taken for each sample.

Colour features were calculated as the average of four consecutive measurements at random locations of samples using a Minolta colorimeter (CR 400, D65 illuminant, and 10° observer, Konica Minolta Sensing Inc., Osaka, Japan) after calibration with a standard ceramic tile. Colour was expressed in terms of lightness (L*), redness (a*), and yellowness (b*) using the Commission Internationale de l'Éclairage (CIE) colour system [36, 37].

Water holding capacity (WHC) was calculated based on the meat water loss when pressure is applied to the muscle [38]. Cubic sample pieces weighing 2 g were laid between two filter paper circles placed on acrylic plates, on which a 10 kg weight was put for 5 min; the samples were then removed from the filter papers and weighed. Water loss was calculated as the difference between the initial and final weight. Results were expressed as the percentage of drip loss relative to the initial sample weight.
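The drip-loss computation described above is a one-line calculation; the weights used below are illustrative:

```python
# Drip loss as described above: the difference between initial and final
# weight, expressed as a percentage of the initial sample weight (grams).
def water_holding_capacity_loss(initial_g: float, final_g: float) -> float:
    """Percentage drip loss relative to the initial sample weight."""
    return (initial_g - final_g) / initial_g * 100.0

print(water_holding_capacity_loss(2.0, 1.5))  # 25.0 (% drip loss for a 2 g cube)
```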

All samples were preclassified into four quality grades based on colour reflectance (L*) and ultimate pH: pale (P); pale, soft, and exudative (PSE); dark, firm, and dry (DFD); and normal (N) [31]. Threshold values are based on information adapted from Barbin et al. [1], with samples within these ranges representing the four classes. After classifying the samples, the dataset comprised 38 normal, 24 PSE, 89 pale, and 7 DFD samples.

2.3. Wavelength Selection

In order to achieve an efficient dimensionality reduction, correlation-based feature selection (CFS) was applied to remove irrelevant and redundant information from the spectra. This method is based on the degree of dependence or predictability of one wavelength with another, indicating the best subset of features for classification. Yu et al. [39] described that CFS exploits a best-first search based on a correlation measurement that evaluates the appropriateness of a subset by the individual predictive ability of each feature and the degree of correlation between features. Decision Table, REPTree, and M5P classifiers, which have an inherent attribute-selection characteristic that reduces dimensionality while providing a classification model, were also applied [40].
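The subset-quality score behind CFS can be made concrete. The sketch below computes Hall's merit formula, merit = k·r̄cf / √(k + k(k−1)·r̄ff), using absolute Pearson correlations; this is a simplified stand-in for the actual CFS heuristic (which discretizes and uses symmetrical uncertainty in some implementations), so treat the exact correlation measure as an assumption.

```python
# Sketch of the CFS subset "merit": it rises with the mean feature-class
# correlation (r_cf) and falls with the mean feature-feature correlation
# (r_ff), favouring subsets that are predictive yet non-redundant.
import numpy as np

def cfs_merit(X, y):
    k = X.shape[1]
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(k)])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                    for i in range(k) for j in range(i + 1, k)])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)
```

A best-first search over subsets would then simply maximize this merit, which is the quantity the paper reports alongside each selected subset.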

The first step of feature selection was to identify the relevant technological (pH, WHC, and colour) attributes assessed by classification in categories PSE, P, N, and DFD. The referenced quality attributes were processed by CFS, and the best subset was obtained (Figure 1).

Figure 1: Fluxogram of proposed steps for attribute selection and wavelength selection.

After identification of the best subset merit, wavelength selection was performed focusing on the attributes of the best subset. Because the dataset was imbalanced, a resampling method was applied to obtain a uniform class distribution. The best subset was thus calculated in two scenarios, balanced and imbalanced, to validate the subset obtained.
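The balancing step can be illustrated with random undersampling, one simple way to approach a uniform class distribution (the paper does not specify its exact resampling filter, so this is an assumption; the class sizes below mirror the paper's original counts for illustration only).

```python
# Sketch of class balancing by random undersampling: every class keeps at
# most as many samples as the minority class.
from collections import Counter
import numpy as np

def undersample(X, y, seed=0):
    rng = np.random.default_rng(seed)
    n_min = min(Counter(y).values())
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in np.unique(y)
    ])
    return X[keep], y[keep]
```

Applied to class counts like 86/41/24/7, every class would be cut down to 7 samples, which shows why balancing such a small dataset is a validation step here rather than the primary experimental setting.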

In order to obtain a unique set of attributes, the best-subset attributes of the raw (original) dataset and of the resampled dataset were merged.

The second step was the application of the CFS, Decision Table, REPTree, and M5P algorithms to describe the correlation between the selected attributes and the wavelengths, individually. Finally, a second merge among the best wavelengths for each attribute yields the final wavelengths that were used as sample descriptors in the classification task. CFS, Decision Table, REPTree, and M5P produced different sets of wavelengths. SVM, as an SMO-trained support vector machine based on a polynomial kernel [41, 42], was applied to all wavelengths with traditional parameters to establish a comparison with traditional ML approaches, because SVM has no dimension-reduction properties.

2.4. Classifier Evaluation

Wavelength selection was compared by coefficient of correlation (CC), mean absolute error (MAE), and root mean squared error (RMSE). The first gives a quantitative value of the relationship between the subset of wavelengths, W, and the most significant traditional attributes found. MAE measures the average magnitude of the error between those attributes and W. Finally, RMSE is a quadratic scoring of the average error magnitude. In other words, a larger difference between MAE and RMSE indicates greater variance in the individual sample errors, while similar RMSE and MAE values mean that all errors have the same magnitude.
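The three comparison metrics above can be computed directly with NumPy; the toy arrays in the test illustrate the MAE-versus-RMSE point, since a single large error inflates RMSE much more than MAE.

```python
# The three wavelength-selection metrics above, computed with NumPy.
# When RMSE greatly exceeds MAE, individual errors vary widely in magnitude.
import numpy as np

def cc(a, b):    # coefficient of correlation (Pearson)
    return float(np.corrcoef(a, b)[0, 1])

def mae(a, b):   # mean absolute error
    return float(np.mean(np.abs(a - b)))

def rmse(a, b):  # root mean squared error
    return float(np.sqrt(np.mean((a - b) ** 2)))
```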

The accuracy of the model for quality classification was assessed by confusion matrices as the final output of 10-fold cross-validation (CV) over 30 repetitions. This setup was adopted because the dataset has four non-overlapping classes (namely, PSE, DFD, P, and N). Confusion matrices represent appropriate evaluation criteria for selecting the most suitable classifier (REPTree, M5P, or Decision Table).

True positive (TP) and true negative (TN) mean that a sample was correctly recognized, which is the desired result. False positive (FP) and false negative (FN) occur when the ML approach incorrectly assigns a wrong class to a sample.

The average per-class effectiveness of a classifier was measured by average accuracy. The average per-class agreement of the data class labels with those of the classifiers was estimated by Precision, and the average per-class effectiveness of a classifier in identifying class labels was estimated by Recall. F-Measure is the harmonic mean of Precision and Recall and allows us to determine if one algorithm is superior to another for a particular goal. The results were also summarized in a critical difference (CD) diagram, as previously proposed in [43].
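The per-class metrics above can be derived directly from a multiclass confusion matrix and macro-averaged; the sketch below assumes the common convention of rows as true classes and columns as predicted classes.

```python
# Macro-averaged Precision, Recall, and F-Measure from a multiclass
# confusion matrix (rows = true class, columns = predicted class).
import numpy as np

def macro_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)     # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision.mean(), recall.mean(), f1.mean()
```

Macro-averaging weights every class equally, which is why a classifier that ignores the rare PSE and DFD classes (zero per-class TP, Precision, and Recall) is penalized here even if its overall accuracy looks acceptable.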

3. Results and Discussion

3.1. Wavelength Subset Selection

As Table 1 indicates, pH and L* present a gradient distribution along the classes addressed. All the samples matched the quality criteria used, supporting the classification into one of the four possible classes. The poultry dataset consisted of 158 samples, comprising 24, 86, 41, and 7 samples of classes PSE, P, N, and DFD, respectively. Step 1 (Figure 1) consisted in the application of CFS in order to find the best subset of attributes based on subset merit; the selected attributes were pH and L*.

Table 1: Statistical information of the original dataset.

The same approach was performed on the balanced dataset to corroborate the subset found. The balanced dataset was created using a resampling technique that removes samples from the majority classes, namely, PSE (24), P (86), and N (41). After resampling, the balanced dataset was composed of 12, 7, 6, and 6 samples of PSE, P, N, and DFD, respectively. The selected attributes were again pH and L*. The pH values in the database ranged from 5.62 to 6.43, with an average of 5.95 and a standard deviation of 0.15. Luminosity values ranged between 43.53 and 67.53, with an average of 57.32 and a standard deviation of 6.43.

After subset merging, the unified attribute set (pH and L*) was obtained. Step 2 was performed using the CFS, DTable, REPTree, and M5P algorithms to identify the most relevant wavelengths for pH and L*, as shown in Table 2.

Table 2: Wavelengths selected by correlation-based feature selection (CFS), Decision Table (DTable), REPTree, and M5P algorithms using pH and L*, compared by coefficient of correlation (CC), mean absolute error (MAE), root mean squared error (RMSE), and merit.

CFS evaluation was based on merit and was not compared to the others, which were evaluated by coefficient of correlation and error rates. Independently of the measurement, the wavelengths with the highest correlation (or merit) were selected to perform prediction.

The M5P algorithm identified 12 wavelengths for pH description and obtained a coefficient of correlation of 0.80, the highest pH correlation value and the lowest errors (MAE and RMSE). Based on these 12 wavelengths, M5P reduced the model to a single linear model (Appendix A.1), without building a tree.

For L*, the M5P algorithm achieved the highest correlation (0.95) and the lowest error values (Figure 2). Thus, the combination of wavelengths identified by M5P was used for further classification.

Figure 2: Decision Tree designed by M5P for L* prediction, using the selected wavelengths shown in Table 2.

Considering L*, REPTree and M5P found a similar amplitude value of 2.29 as a threshold to obtain a classification (Figures 2 and 3). It is possible to observe that REPTree applied a second verification using additional wavelengths to achieve the final result (Figure 3). For pH, the REPTree model creates a tree (Figure 7) based on 6 wavelengths not observed in the M5P selection. However, even with a lower number of nodes, M5P achieves superior correlation results, which is explained by its use of linear models to identify a class at the leaf node.

Figure 3: Decision Tree of wavelengths correlated to L* prediction using REPTree, as shown in Table 2.

A significant result regarding wavelength reduction was observed in the Decision Table, which reduces L* identification to a single wavelength (Table 3). Using this wavelength, a Decision Table with 10 rules obtained a coefficient of correlation of 0.91. For pH, the Decision Table algorithm obtained 37 rules based on 5 wavelengths, as shown in Table 7.

Table 3: Decision Table for L* identification based on a single wavelength.

Among the selected wavelengths, one related to the red colour in the visible spectrum was in the best subsets for both L* and pH identification. Independently of the algorithm, pH required more wavelengths than L*: an average of 6 wavelengths against 3.5, respectively.

Most of the wavelengths identified are closely related to the yellow-red colour in the visible spectrum (570–750 nm), which is the main range where chicken meat colour varies. Regarding the NIR spectrum, the main wavelengths identified were 968 nm, related to the O-H stretching associated with water; 1354 nm, associated with C-H stretching; and 1880 nm, associated with O-H stretching [44].

3.2. Poultry Quality Classification

To compare the algorithms, SVM was induced over the CFS-selected subset and the full-wavelength dataset; DTable over its own and the M5P-selected subsets; and REPTree over the selected wavelengths, the full wavelengths, and the traditional parameters, as shown in Table 4.

Table 4: Selected wavelength subsets and poultry classification results (pale, soft, and exudative (PSE); dark, firm, and dry (DFD); normal (N); or pale (P)) obtained by support vector machine (SVM), correlation-based feature selection (CFS), Decision Table (DTable), REPTree, and M5P, as mean average true positive (TP), false positive (FP), Precision, Recall, and F-Measure after 30 repetitions.

After 30 repetitions, the standard deviation of the weighted average was about 0.006 (DTable), 0.009 (Decision Table over M5P selection), 0.017 (REPTree), and 0.004 (SVM) for the data presented in Table 4. The standard deviation of the experiments presented in Table 5 was about 0.008 for REPTree and 0.007 for SVM. Finally, the standard deviation of the experiments highlighted in Table 6 was about 0.002, 0.008, and 0.007 for SVM, REPTree, and REPTree over pH and L*, respectively.

Table 5: Poultry classification results (pale, soft, and exudative (PSE); dark, firm, and dry (DFD); normal (N) or pale (P)) obtained by support vector machine (SVM) and REPTree in mean of average true positive (TP), false positive (FP), Precision, Recall, and F-Measure after 30 repetitions.
Table 6: Traditional attributes and poultry classification results (pale, soft, and exudative (PSE); dark, firm, and dry (DFD); normal (N) or pale (P)) obtained by support vector machine (SVM) and REPTree in mean of average true positive (TP), false positive (FP), Precision, Recall, and F-Measure after 30 repetitions.

Considering the approaches that handle the selected wavelengths, REPTree achieved the best results, with the highest precision (0.740). When compared to SVM, the other algorithms provided better results for multiclass evaluation. This was observed for PSE and DFD samples, for which the SVM values of TP, Precision, Recall, and F-Measure were zero, meaning that none of these samples was correctly classified. P and N samples, however, achieved good Precision and F-Measure results with SVM. The poor results observed for PSE and DFD corroborate previous investigations [13, 19–21], highlighting the limitation in a multiclass scenario.

The ability to overcome this limitation was observed for REPTree, which was superior to SVM in multiclass identification. It is possible to see in Table 4 that DTable obtained zero values in the measurements for PSE samples. DTable performed better on the M5P-selected subset, achieving the lowest FP rate on PSE samples. In general, REPTree was the best approach, followed by DTable on the M5P subset. Pure DTable and SVM on the CFS subset presented limitations in the multiclass scenario investigated.

It was observed that DFD and PSE, the classes with the smallest number of samples, were the ones misclassified by SVM and DTable; there were far fewer DFD samples (7) than P samples (86). To investigate the balancing effect on SVM, DTable, and DTs, the same experiments were performed on the balanced dataset, and the results obtained were similar. SVM and DTable algorithms are limited in handling a multiclass scenario, being better suited to binary problems.

SVM and REPTree were compared to support the hypothesis that high dimensionality influences classification accuracy. The results of SVM applied to the full-wavelength dataset are presented in Table 5. The same limitation in multiclass scenarios (recognition of PSE and DFD was deficient) and lower SVM precision (0.531) were observed.

REPTree achieved good results based on the full wavelengths: TP of 0.778, FP of 0.199, precision of 0.747, recall of 0.778, and F-Measure of 0.741, as observed in Table 5. Using all wavelengths, REPTree provided slightly superior results over the wavelength subset, with a difference of 0.006 in F-Measure.

3.3. Comparison between NIR and Traditional Attributes

REPTree was applied to the traditional parameters for classification of samples, achieving better results than with NIR wavelengths: the highest TP value and the lowest FP value, with an F-Measure of 0.981. The results obtained for REPTree applied to the traditional parameters are similar because, in both cases, REPTree selects only pH and L* to construct the tree (Table 6). This follows from the Decision Tree strategy of splitting on the most informative features, as computed by information gain/ratio. In other words, the REPTree algorithm identifies that pH and L* are sufficient to solve the classification problem. Table 6 also shows that the SVM algorithm over the traditional parameters exhibits the same behavior as in previous experiments, failing to handle the multiclass problem with robustness and obtaining 0.621 average precision. CFS was applied on the best subset (Section 3.1) to observe the SVM algorithm over a subset of traditional parameters; an average precision of 0.626, similar to the results based on all traditional parameters, was obtained.

An advantage of wavelength selection is the reduced complexity of the generated Decision Tree. It is possible to observe a DT composed of four nodes (wavelengths) (Figure 4), simpler than the Decision Tree created from the whole wavelength set, which had four levels with ten nodes, a more complex solution, as shown in Figure 5. These figures show example trees: since 30 repetitions were performed to evaluate the precision of the method, the wavelength amplitudes (thresholds) can change for each model, but the selected wavelengths were the same.

Figure 4: Example of Decision Tree of selected wavelengths for poultry classification created by the REPTree algorithm.
Figure 5: Example of Decision Tree of whole wavelengths for poultry classification created by REPTree algorithm.

The complexity can be exemplified by the number of paths to reach the leaves of the tree. Based on DT presented in Figure 4, it was possible to classify PSE, DFD, and N samples with just one path for each class. In Figure 5, only DFD provided this observation. For instance, PSE has three different paths from the root to the leaf of classification.

In order to compare the results obtained, Friedman's statistical test [43] was applied. The null hypothesis states that the performances of the induced classifiers are equivalent regarding the averaged accuracy per class. Whenever the null hypothesis was rejected, the Nemenyi post hoc test was applied, which states that the performance of two algorithms is significantly different if the corresponding average ranks differ by at least a critical difference (CD) value. When multiple classifiers are compared in this way, the results can be represented graphically with a simple critical difference (CD) diagram.
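The Friedman-then-Nemenyi procedure can be sketched as follows. The scores below are entirely hypothetical (they are not the paper's results), and the critical difference uses the standard formula CD = q_α·√(k(k+1)/(6N)) with the Studentized-range-based q value for k = 4 classifiers at α = 0.05.

```python
# Sketch of the statistical comparison: Friedman's test over per-run ranks
# of k classifiers, then the Nemenyi critical difference. Scores are
# illustrative, not the paper's measurements.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Rows: N = 5 evaluation runs; columns: k = 4 classifiers.
scores = np.array([
    [0.74, 0.53, 0.70, 0.62],
    [0.76, 0.55, 0.71, 0.60],
    [0.73, 0.50, 0.69, 0.63],
    [0.75, 0.54, 0.72, 0.61],
    [0.77, 0.52, 0.70, 0.64],
])
stat, p = friedmanchisquare(*scores.T)  # null: all classifiers equivalent

n, k = scores.shape
# Average rank per classifier (rank 1 = best, i.e., highest score).
avg_ranks = np.mean([k + 1 - rankdata(row) for row in scores], axis=0)

q_alpha = 2.569  # Studentized-range q for k = 4, alpha = 0.05 (Demsar, 2006)
cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * n))
```

Two classifiers whose average ranks differ by less than `cd` end up connected in the CD diagram, exactly the reading used for Figure 6.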

Figure 6 shows the results of the statistical tests.

Figure 6: Comparison of the averaged per-class accuracy values of the classifiers according to the Nemenyi test. Groups of classifiers that are not significantly different are connected.

Thus, according to the results, no significant differences were observed when comparing the REPTree method to SVM (with the best subset by CFS) and DTable (over M5P-selected wavelengths). SVM without wavelength selection was the worst method overall, while DTable ranked best. This suggests that the DTable method is the best approach by rank, but this algorithm cannot handle PSE samples, and SVM based on CFS presented the same problem for PSE and DFD. Thus, the REPTree model is a feasible choice (Figure 4).

Based on the methodology applied in the current work, it was possible to observe that ML approaches can have a substantial impact on spectral data analysis. Differently from traditional applications of SVM and ANN, which require several parametrizations and produce a "black box" output model, the results presented show the potential of Decision Table and Decision Tree as alternatives for performing multivariate spectral subset selection, in addition to constructing accurate and interpretable prediction models.

The attribute selection step contributed to a better comprehension of the relevant wavelengths, diminishing the dimensionality of the problem and enabling a classification model with reduced overfitting.

The wavelengths found optimal for the classification task are in the visible range of the spectrum, which allows the creation of simple acquisition devices. Such equipment could substitute for the colorimeter, pH meter, texture meter, and WHC evaluation setup. Another advantage is providing an alternative to traditional NIRS equipment, where a simple device limited to a few wavelengths could achieve a similar analysis. Alternatively, a pH meter associated with a colorimeter could provide satisfactory results with accuracy similar to NIR, given proper data processing techniques. In addition, it is common practice to measure pH in the meat processing industry, as it affects several other attributes (e.g., colour and WHC). Predicting the pH could be an alternative for comparing the model accuracy against the measured value of this attribute and, for industrial processing lines, could represent a fast method for a large number of samples.

4. Conclusion

Poultry meat quality assessment is possible based on spectral analysis considering a few selected NIR wavelengths. All selected wavelengths are in the visible range of the spectrum, which allows the creation of simple acquisition devices based on low-cost components.

Furthermore, the use of Decision Table and Decision Tree induction algorithms was proposed to avoid complex configurations or the need for expertise in a particular technique. SVM and DTable presented limitations when applied to multiclass scenarios. The DT model (REPTree) obtained superior performance while yielding a comprehensible model that describes the optimal wavelengths.


A.1. Linear Model Obtained by M5P to Calculate pH, as in Section 3.1

A.2. Decision Tree Designed by REPTree for pH Prediction, as in Section 3.1

Figure 7

A.3. Decision Table for Prediction Based on 600 nm, 602 nm, 604 nm, 608 nm, and 622 nm by the Use of 37 Different Rules

Table 7: Decision Table for pH identification based only on 600 nm, 602 nm, 604 nm, 608 nm, and 622 nm.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.


Acknowledgments

The authors acknowledge a preliminary discussion concerning the application of SVM and REPTree dealing with physical and chemical features of chicken meat, as presented in a previous paper [26]. The authors also acknowledge Professor Massami Shimokomaki (in memoriam) for his contribution to this research. The authors would like to thank the São Paulo Research Foundation (FAPESP), Young Researchers Award (Grant no. 2015/24351-2), and the Brazilian National Council for Scientific and Technological Development (CNPq) (Grant no. 404852/2016-5).


  1. D. F. Barbin, C. M. Kaminishikawahara, A. L. Soares et al., “Prediction of chicken quality attributes by near infrared spectroscopy,” Food Chemistry, vol. 168, pp. 554–560, 2015.
  2. J. U. Porep, D. R. Kammerer, and R. Carle, “On-line application of near infrared (NIR) spectroscopy in food production,” Trends in Food Science and Technology, vol. 46, no. 2, pp. 211–230, 2015.
  3. T. Wauters, K. Verbeeck, P. Verstraete, G. V. Berghe, and P. De Causmaecker, “Real-world production scheduling for the food industry: an integrated approach,” Engineering Applications of Artificial Intelligence, vol. 25, no. 2, pp. 222–228, 2012.
  4. S. Sharifzadeh, L. H. Clemmensen, C. Borggaard, S. Støier, and B. K. Ersbøll, “Supervised feature selection for linear and non-linear regression of Lab color from multispectral images of meat,” Engineering Applications of Artificial Intelligence, vol. 27, pp. 211–227, 2014.
  5. V. S. Kodogiannis and A. Alshejari, “An adaptive neuro-fuzzy identification model for the detection of meat spoilage,” Applied Soft Computing, vol. 23, pp. 483–497, 2014.
  6. Z. Shan, W. HaiBin, Z. FengBing, B. Jun et al., “Study on hyperspectral image technology based on manifold fuzzy clustering for pork quality classification,” Journal of Food Safety and Quality, vol. 6, no. 4, pp. 1421–1428, 2015.
  7. A. Przybylak, P. Ślósarz, P. Boniecki et al., “Marbling classification of lambs carcasses with the artificial neural image analysis,” in Proceedings of the Seventh International Conference on Digital Image Processing (ICDIP15), p. 963113, Los Angeles, CA, USA, April 2015.
  8. L. Ravikanth, C. B. Singh, D. S. Jayas, and N. D. White, “Classification of contaminants from wheat using near-infrared hyperspectral imaging,” Biosystems Engineering, vol. 135, pp. 73–86, 2015.
  9. P. Zapotoczny, P. M. Szczypiński, and T. Daszkiewicz, “Evaluation of the quality of cold meats by computer-assisted image analysis,” LWT-Food Science and Technology, vol. 67, pp. 37–49, 2016.
  10. A. P. A. Barbon, S. Barbon, R. G. Mantovani, E. M. Fuzyi, L. M. Peres, and A. M. Bridi, “Storage time prediction of pork by computational intelligence,” Computers and Electronics in Agriculture, vol. 127, pp. 368–375, 2016. View at Publisher · View at Google Scholar · View at Scopus
  11. J. Qiao, N. Wang, M. Ngadi et al., “Prediction of drip-loss, pH, and color for pork using a hyperspectral imaging technique,” Meat science, vol. 76, no. 1, pp. 1–8, 2007. View at Publisher · View at Google Scholar · View at Scopus
  12. A. A. Argyri, E. Z. Panagou, P. Tarantilis, M. Polysiou, and G.-J. Nychas, “Rapid qualitative and quantitative detection of beef fillets spoilage based on Fourier transform infrared spectroscopy data and artificial neural networks,” Sensors and Actuators B: Chemical, vol. 145, no. 1, pp. 146–154, 2010. View at Publisher · View at Google Scholar · View at Scopus
  13. D. Wang, X. Wang, T. Liu, and Y. Liu, “Prediction of total viable counts on chilled pork using an electronic nose combined with support vector machine,” Meat Science, vol. 90, no. 2, pp. 373–377, 2012. View at Publisher · View at Google Scholar · View at Scopus
  14. M. Liu, M. Wang, J. Wang, and D. Li, “Comparison of random forest, support vector machine and back propagation neural network for electronic tongue data classification: Application to the recognition of orange beverage and Chinese vinegar,” Sensors and Actuators B: Chemical, vol. 177, pp. 970–980, 2013. View at Publisher · View at Google Scholar · View at Scopus
  15. A. A. Argyri, R. M. Jarvis, D. Wedge et al., “A comparison of Raman and FT-IR spectroscopy for the prediction of meat spoilage,” Food Control, vol. 29, no. 2, pp. 461–470, 2013. View at Publisher · View at Google Scholar · View at Scopus
  16. O. S. Papadopoulou, E. Z. Panagou, F. R. Mohareb, and G.-J. E. Nychas, “Sensory and microbiological quality assessment of beef fillets using a portable electronic nose in tandem with support vector machine analysis,” Food Research International, vol. 50, no. 1, pp. 241–249, 2013. View at Publisher · View at Google Scholar · View at Scopus
  17. M. Prevolnik, D. Andronikov, B. Žlender et al., “Classification of dry-cured hams according to the maturation time using near infrared spectra and artificial neural networks,” Meat Science, vol. 96, no. 1, pp. 14–20, 2014. View at Publisher · View at Google Scholar · View at Scopus
  18. T. Qiao, J. Ren, C. Craigie, J. Zabalza, C. Maltin, and S. Marshall, “Quantitative prediction of beef quality using visible and NIR spectroscopy with large data samples under industry conditions,” Journal of Applied Spectroscopy, vol. 82, no. 1, pp. 137–144, 2015. View at Publisher · View at Google Scholar · View at Scopus
  19. P.-C. Chang, C.-Y. Fan, and W.-Y. Dzan, “A CBR-based fuzzy decision tree approach for database classification,” Expert Systems with Applications, vol. 37, no. 1, pp. 214–225, 2010. View at Publisher · View at Google Scholar · View at Scopus
  20. R. V. Sharan and T. J. Moir, “Comparison of multiclass SVM classification techniques in an audio surveillance application under mismatched conditions,” in Proceedings of the IEEE 19th International Conference on Digital Signal Processing (DSP), pp. 83–88, Hong Kong, China, August 2014.
  21. M. Rajabi, N. Nematbakhsh, and A. Monadjemi, “A new decision tree for recognition of Persian handwritten characters,” International Journal of Computer Applications, vol. 44, no. 6, pp. 52–58, 2012. View at Publisher · View at Google Scholar
  22. I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, Burlington, MA, USA, 2005.
  23. Y. Zhao and Y. Zhang, “Comparison of decision tree methods for finding active objects,” Advances in Space Research, vol. 41, no. 12, pp. 1955–1959, 2008. View at Publisher · View at Google Scholar · View at Scopus
  24. N. E. I. Karabadji, H. Seridi, I. Khelf, N. Azizi, and R. Boulkroune, “Improved decision tree construction based on attribute selection and data sampling for fault diagnosis in rotating machines,” Engineering Applications of Artificial Intelligence, vol. 35, pp. 71–83, 2014. View at Publisher · View at Google Scholar · View at Scopus
  25. L. D. C., “Article: comparative analysis of random forest, RepTree and j48 classifiers for credit risk prediction,” in Proceedings of the IJCA on International Conference on Communication, Computing and Information Technology (ICCCMIT), vol. 2014, no. 3, pp. 30–36, Chennai, India, 2015.
  26. S. Barbon Jr., A. P. A. Barbon, R. G. Mantovani, and D. F. Barbin, “Comparison of SVM and REPTree for classification of poultry quality,” in Proceedings of the Modelling, Simulation and Identification/ Intelligent Systems and Control–2016 (MSI 2016), pp. 840–039, Campinas, SP, Brazil, August 2016.
  27. C. Zhan, A. Gan, and M. Hadi, “Prediction of lane clearance time of freeway incidents using the M5P tree algorithm,” IEEE Transactions on Intelligent Transportation Systems, vol. 12, no. 4, pp. 1549–1557, 2011. View at Publisher · View at Google Scholar · View at Scopus
  28. J.-J. Zhang, M. Fang, H. Wang, and X. Li, “Dependence maximization based label space dimension reduction for multi-label classification,” Engineering Applications of Artificial Intelligence, vol. 45, pp. 453–463, 2015. View at Publisher · View at Google Scholar · View at Scopus
  29. G. H. Subramanian, J. Nosek, S. P. Raghunathan, and S. S. Kanitkar, “A comparison of the decision table and tree,” Communications of the ACM, vol. 35, no. 1, pp. 89–94, 1992. View at Publisher · View at Google Scholar · View at Scopus
  30. F. E. Tay and L. Shen, “Fault diagnosis based on rough set theory,” Engineering Applications of Artificial Intelligence, vol. 16, no. 1, pp. 39–43, 2003. View at Publisher · View at Google Scholar · View at Scopus
  31. D. F. Barbin, S. M. Mastelini, S. Barbon, G. F. Campos, A. P. A. Barbon, and M. Shimokomaki, “Digital image analyses as an alternative tool for chicken quality assessment,” Biosystems Engineering, vol. 144, pp. 85–93, 2016. View at Publisher · View at Google Scholar · View at Scopus
  32. M. Kamruzzaman, Y. Makino, and S. Oshita, “Non-invasive analytical technology for the detection of contamination, adulteration, and authenticity of meat, poultry, and fish: a review,” Analytica Chimica Acta, vol. 853, pp. 19–29, 2015. View at Publisher · View at Google Scholar · View at Scopus
  33. H. Ding, R.-J. Xu, and D. K. O. Chan, “Identification of broiler chicken meat using a visible/near-infrared spectroscopic technique,” Journal of the Science of Food and Agriculture, vol. 79, no. 11, pp. 1382–1388, 1999. View at Publisher · View at Google Scholar
  34. Y. H. B. Kim, R. D. Warner, and K. Rosenvold, “Influence of high pre-rigor temperature and fast pH fall on muscle proteins and meat quality: a review,” Animal Production Science, vol. 54, no. 4, pp. 375–395, 2014. View at Publisher · View at Google Scholar · View at Scopus
  35. R. Lawrie and D. Ledward, Lawrie Meat Science, CRC Press, New York, NY, USA, 2006.
  36. Commission Internationale de I’Eclairage–CIE, Recommendations on Uniform Color Spaces, Color-Difference Equations, Psychometric Color Terms, CIE Publication, Vienna, Austria, 1978,
  37. K. O. Honikel, “Reference methods for the assessment of physical characteristics of meat,” Meat Science, vol. 49, no. 4, pp. 447–457, 1998. View at Publisher · View at Google Scholar · View at Scopus
  38. R. Hamm and F. Deatherage, “Changes in hydration, solubility and charges of muscle proteins during heating of meat,” Journal of Food Science, vol. 25, no. 5, pp. 587–610, 1960. View at Publisher · View at Google Scholar · View at Scopus
  39. L. Yu and H. Liu, “Efficient feature selection via analysis of relevance and redundancy,” The Journal of Machine Learning Research, vol. 5, pp. 1205–1224, 2004. View at Google Scholar
  40. P. Turcinek, J. Stastny, and A. Motycka, “Usage of data mining techniques on marketing research data,” in Proceedings of the 11th WSEAS International Conference on Applied Computer and Applied Computational Science, pp. 159–164, Rovaniemi, Finland, April 2012.
  41. S. Kulkarni, Machine Learning Algorithms for Problem Solving in Computational Applications: Intelligent Techniques: Intelligent Techniques, Information Science Reference, Hershey, PA, USA, 2012,
  42. S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy, “Improvements to Platt’s SMO algorithm for SVM classifier design,” Neural Computation, vol. 13, no. 3, pp. 637–649, 2001. View at Publisher · View at Google Scholar · View at Scopus
  43. J. Demšar, “Statistical comparisons of classifiers over multiple data sets,” Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006. View at Google Scholar
  44. B. Osborne, T. Fearn, P. Hindle, and P. Hindle, Practical NIR Spectroscopy with Applications in Food and Beverage Analysis: Longman Food Technology, Longman Scientific and Technical, Harlow, UK, 1993,