Abstract

Beef is an animal food product with high nutritional value because it contains carbohydrates, proteins, fats, vitamins, and minerals. Its quality should therefore be maintained so that consumers receive good-quality beef. Determination of beef quality is commonly conducted visually by comparing the actual beef with reference pictures of each beef class. This process has weaknesses, as it is subjective in nature and takes a considerable amount of time. Therefore, an automated system based on image processing that is capable of determining beef quality is required. This research aims to develop an image segmentation method by processing digital images. The designed system consists of image acquisition with varied distance, resolution, and angle. Image segmentation is performed to separate the fat and meat regions using the Otsu thresholding method. Classification was carried out using the decision tree algorithm, and the best accuracies obtained were 90% for training and 84% for testing. The system was then embedded into an Android application. Results show that the image processing technique is capable of proper marbling score identification.

1. Introduction

Beef is one of many food products that are prone to contamination by microorganisms. Its water and nutritional contents make it an ideal medium for the growth and proliferation of microorganisms [1, 2]. Contaminated beef degrades easily and has a shorter storage life. Beef class is valued by two factors: its price and its quality. Quality itself is measured by four characteristics: marbling, meat color, fat color, and meat density. Specifically, marbling is the dominant parameter that determines meat quality [3, 4]. Determination of beef quality is commonly conducted visually by comparing the actual beef with reference pictures of each beef class. This process has weaknesses, as it is subjective in nature and takes a considerable amount of time. Therefore, an automated system based on image processing that is capable of determining beef quality is required. Several studies suggest that image processing can be applied to analyze beef color and texture, allowing the analysis results to be used as reference parameters in the process of meat quality identification [3–6]. Furthermore, marbling grade evaluation has been conducted using the watershed algorithm and an artificial neural network [7].

This research aims to develop an image segmentation method using Otsu thresholding to separate the fat and meat regions. Studies on image processing using thresholding segmentation have been conducted before [8–12]. The developed algorithm proved capable of identifying meat quality based on color and texture. The system is then embedded into an Android application to enable even faster and easier use.

2. Theory

Several studies on the application of image processing for beef quality identification have been conducted earlier [4–6]. One of them determined beef quality using texture analysis with the gray level co-occurrence matrix (GLCM) method [3]. Beef quality is categorized into 12 grades based on the amount of fat it contains. A study by Shiranita et al. shows that the GLCM method is effective in determining beef quality. Another study on the application of image texture to classify beef type yielded a correlation of up to 0.8 [5]. A further study that designed hardware and software for beef image segmentation using a vision threshold method can serve as the initial process for beef quality testing [6]. These studies indicate that image processing based on meat texture can be applied to identify beef quality. Other studies also show that Android-based mobile image analysis methods are applicable [13–15]. These Android-related studies include image recognition using Android smartphones [13], basic digital image processing on Android [14], and a power consumption meter application based on image processing using Android smartphones [15]. The results of those three studies show that image processing can be embedded into Android-based mobile devices.

Beef quality is categorized into 12 grades [3], as depicted in Figure 1. This grading is the result of image texture analysis using the gray level co-occurrence matrix (GLCM) method and a meat type recognition process using the multi-support vector machine (MSVM) method.

2.1. Image Segmentation and Feature Extraction

The image processing starts with image segmentation, which consists of two stages. The first stage separates the meat from the background. This process begins by thresholding the blue channel of the RGB (Red, Green, and Blue) image using the Otsu thresholding method to obtain a binary image. Afterwards, the binary image is used as a mask for object cropping. Once the object is separated from the background, the second stage of segmentation, separating meat and fat, proceeds. This stage starts by converting the RGB color space to grayscale, after which thresholding is applied to separate meat and fat. Otsu’s method is one of the oldest image segmentation methods and is based on a statistical, probabilistic formulation [16]. It is also one of the best automatic thresholding methods [17]. The basic principle of Otsu’s method is to divide the image into two classes that form the object and the background, and the automatic threshold is obtained by finding the maximum variance between the two classes [9, 10]. Let $i = 0, 1, \ldots, L-1$ be the gray levels in the image, $n_i$ the number of pixels with gray level $i$, and $N$ the total number of pixels; the probability of gray level $i$ in the image is then given by [16, 18]

$$p_i = \frac{n_i}{N}, \qquad \sum_{i=0}^{L-1} p_i = 1.$$

If $t$ is the automatic threshold that divides the pixels into two classes $C_0$ and $C_1$ [17, 19], the probability distributions of the gray levels for the two classes are

$$\omega_0(t) = \sum_{i=0}^{t} p_i, \qquad \omega_1(t) = \sum_{i=t+1}^{L-1} p_i.$$

The mean gray levels for classes $C_0$ and $C_1$ are

$$\mu_0(t) = \sum_{i=0}^{t} \frac{i\,p_i}{\omega_0(t)}, \qquad \mu_1(t) = \sum_{i=t+1}^{L-1} \frac{i\,p_i}{\omega_1(t)}.$$

If $\mu_T$ is the overall mean of the image, then summing over both classes gives

$$\mu_T = \omega_0(t)\,\mu_0(t) + \omega_1(t)\,\mu_1(t),$$

whereas the total probability always sums to one, so

$$\omega_0(t) + \omega_1(t) = 1.$$

Otsu defines the variance between the two classes $C_0$ and $C_1$ by

$$\sigma_B^2(t) = \omega_0(t)\,\omega_1(t)\,\bigl[\mu_0(t) - \mu_1(t)\bigr]^2.$$

The optimal threshold value is the one that maximizes the between-class variance, as shown in the following equation:

$$t^* = \arg\max_{0 \le t \le L-1} \sigma_B^2(t).$$
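As an illustration, the following is a minimal NumPy sketch of this threshold search, assuming an 8-bit grayscale image (L = 256); the function and variable names are illustrative and are not taken from the paper's implementation.

import numpy as np

def otsu_threshold(gray):
    # Histogram and gray-level probabilities p_i (8-bit image: L = 256 levels).
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)

    best_t, best_sigma = 0, -1.0
    for t in range(256):
        w0 = p[:t + 1].sum()                              # class probability omega_0(t)
        w1 = 1.0 - w0                                     # class probability omega_1(t)
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t + 1] * p[:t + 1]).sum() / w0     # class mean mu_0(t)
        mu1 = (levels[t + 1:] * p[t + 1:]).sum() / w1     # class mean mu_1(t)
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2              # between-class variance
        if sigma_b > best_sigma:
            best_t, best_sigma = t, sigma_b
    return best_t                                         # optimal threshold t*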

Once segmentation is done, feature extraction is carried out based on two parameters: meat area and fat area. Meat area is the number of pixels making up the meat region, whereas fat area is the number of pixels making up the fat region. Both parameters are used as inputs for the classification algorithm that determines the marbling score.

Otsu’s method is a global method that searches for the threshold minimizing the within-class variance of the original image histogram, that is, of the background and foreground classes. The purpose of Otsu thresholding is to divide the gray-level image histogram into two distinct regions automatically, without requiring the user to enter a threshold value. The approach taken by Otsu’s method is to conduct a discriminant analysis, determining a variable that can distinguish between two or more naturally occurring groups; the analysis maximizes this variable in order to separate the foreground objects from the background. Since the beef image samples have a large variance between background and object, the Otsu thresholding method is appropriate for a meat quality identification system compared to traditional segmentation.

2.2. Classification

The classification algorithm used in this research is the decision tree with the C4.5 model. Classification starts by forming a root node, followed by entropy calculation for all training data in that node. The parameter with the maximum information gain is used as the splitting node that creates branches. If a node has not yet yielded a single class label, the entropy calculation is repeated; when a node yields a single class label, it becomes a leaf node containing a decision [20–22]. Based on [21], the C4.5 decision tree algorithm achieved the highest classification accuracy compared with Support Vector Machine (SVM) and Multilayer Perceptron (MLP) models. Therefore, in this research the C4.5 model is used for the classification of beef quality.
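As a rough illustration of this classification step, the following sketch uses scikit-learn's DecisionTreeClassifier with the entropy criterion. Note that scikit-learn implements the CART algorithm rather than C4.5, so this is only a stand-in that follows the same information-gain idea, and the feature values and labels shown are hypothetical, not taken from the paper.

# Illustrative sketch only: scikit-learn implements CART, not C4.5, but the
# entropy criterion follows the same information-gain principle described above.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: each row is [meat_area, fat_area] in pixels,
# and each label is the corresponding marbling score.
X_train = [[45210, 3180], [44870, 5320], [43950, 7410], [42800, 9650], [40100, 14230]]
y_train = [4, 5, 6, 7, 9]

clf = DecisionTreeClassifier(criterion="entropy")  # split on maximum information gain
clf.fit(X_train, y_train)

# Predict the marbling score of a new beef image from its extracted features.
print(clf.predict([[44000, 6000]]))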

3. Method

The system designed in this research includes the processes of image acquisition, image segmentation, feature extraction, marbling score classification, and system embedding into an Android application. The block diagram for this system design is given in Figure 2.

Algorithm 1. Automatic thresholding using Otsu’s method and classification using the decision tree proceed as follows:
(1) Start.
(2) Load the image.
(3) Calculate the probability $p_i$ of each intensity level.
(4) Set the initial values of $\omega_0(t)$ and $\mu_0(t)$.
(5) Step through all candidate threshold values $t$.
(6) Update the values of $\omega_0(t)$, $\omega_1(t)$, $\mu_0(t)$, and $\mu_1(t)$.
(7) Calculate the between-class variance $\sigma_B^2(t)$.
(8) The desired threshold is the value of $t$ that maximizes $\sigma_B^2(t)$.
(9) Calculate the meat area ($A_{\text{meat}}$) and the fat area ($A_{\text{fat}}$).
(10) Input the node parameters $A_{\text{meat}}$ and $A_{\text{fat}}$.
(11) Calculate the entropy for all parameters in the node.
(12) Choose the parameter with the maximum gain value.
(13) Use that parameter as the splitting node that creates branches.
(14) If each node yields only one class label, proceed to step (15); otherwise return to step (11).
(15) The nodes become leaves containing the marbling score decisions.
(16) End.

4. Results and Discussion

The processing stages involved in this research consist of image acquisition, image segmentation, feature extraction, classification, and system embedding into an Android application.

4.1. Image Acquisition

Results of beef image acquisition along with the marbling scores are given in Figure 3. The marbling scores (MB) in this research are 4, 5, 6, 7, and 9.

Image acquisition is conducted vertically by varying the camera distance, resolution, and angle. The varied distances are 20 cm and 30 cm. In addition, the varied resolutions are 3.2 MP, 4 MP, and 5 MP. Samples of beef image resulting from distance and resolution variations are given in Table 1.

In order to determine the effect of angle on image acquisition, the following variations were used: 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°, and 360°, as depicted in Figure 4.

Each image from every varied combination is taken twice. Hence, as many as 540 images were obtained.

4.2. Image Segmentation

The process of image segmentation to separate meat and fat consists of two stages. The first separates the meat from the background. This process begins with extracting the blue channel of the RGB image. The extracted blue channel then undergoes Otsu thresholding to yield a binary image, which is used as a mask in the process of meat cropping, as shown in Figure 5.
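A minimal OpenCV sketch of this first stage is given below; the filename and variable names are illustrative, and the exact masking details (for example, whether the inverse threshold is needed) depend on the background brightness of the acquired images.

import cv2

# Load the beef image and extract the blue channel (OpenCV stores images as BGR).
img = cv2.imread("beef.jpg")
blue = img[:, :, 0]

# Otsu thresholding on the blue channel yields a binary mask; if the background is
# brighter than the meat, THRESH_BINARY_INV keeps the meat region white in the mask.
_, mask = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Use the binary mask to crop the meat object away from the background.
object_only = cv2.bitwise_and(img, img, mask=mask)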

Once the meat is separated from the background, the next step of image segmentation, separating the meat and the fat, begins. This stage starts by converting the RGB color space to grayscale, after which thresholding is applied to separate meat and fat. The threshold values used are 76 for fat and 30 for meat. This second stage of image segmentation is given in Figure 6.
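The sketch below illustrates this second stage, reusing the object_only image from the first-stage sketch above. The threshold values 76 and 30 are taken from the text, while the pixel-range interpretation (fat brighter than 76, meat between 30 and 76) is an assumption made for illustration.

import cv2
import numpy as np

# Convert the background-free object image to grayscale before thresholding.
gray = cv2.cvtColor(object_only, cv2.COLOR_BGR2GRAY)

# Assumed interpretation of the two thresholds: pixels brighter than 76 are fat,
# pixels between 30 and 76 are meat, and darker pixels belong to the removed background.
fat_mask = np.where(gray > 76, 255, 0).astype(np.uint8)
meat_mask = np.where((gray >= 30) & (gray <= 76), 255, 0).astype(np.uint8)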

Those results show that all acquired images can be properly segmented using the Otsu thresholding method.

4.3. Feature Extraction

Feature extraction is carried out based on the parameters of meat area and fat area. Meat area is the number of pixels making up the meat region (Figure 6(d)), while fat area is the number of pixels making up the fat region (Figure 6(c)). Samples of the feature extraction results are shown in Table 2.
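A short sketch of this pixel counting, using the illustrative fat_mask and meat_mask arrays from the segmentation sketch above:

import cv2

# Count the pixels in each binary mask produced by the second segmentation stage.
fat_area = cv2.countNonZero(fat_mask)
meat_area = cv2.countNonZero(meat_mask)

# These two values form the feature vector passed to the decision tree classifier.
features = [meat_area, fat_area]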

Both categories of extracted features are then used as inputs in the process of beef quality classification.

4.4. Classification

In this research, marbling score classification is carried out using the decision tree algorithm. The decision tree algorithm for identification of beef quality is shown in Figure 7.

The confusion matrix that resulted from that decision tree in the training process is given in Table 3.

It can be seen in Table 3 that there are nine pieces of beef data that are wrongly identified. Hence, the resulting training accuracy is 90%.
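The accuracy follows from the usual formula; with 9 misclassified samples, the reported 90% is consistent with a training set of about 90 images, a figure inferred here for illustration and not stated explicitly in the text:

$$\text{accuracy} = \frac{N - N_{\text{error}}}{N} \times 100\% = \frac{90 - 9}{90} \times 100\% = 90\%.$$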

The confusion matrix that resulted from the decision tree in the testing process is given in Table 4.

It can be seen in Table 4 that there are 14 pieces of wrongly identified beef data. Therefore, the resulting accuracy is 84%. Results from both system training and testing using decision tree algorithm are given in Table 5.

Distance and resolution variations were performed to determine the best distance and the minimum resolution required for the system to properly acquire the beef image. The acquisition distance affects the level of detail captured by the smartphone camera, so the right distance produces a good image. Results from both system training and testing show that image acquisition at 30 cm gives better accuracy than acquisition at 20 cm. In addition, image acquisition at 4 MP resolution yields better results than at either 3.2 MP or 5 MP. The acquisition angle, in contrast, has no significant effect: test data acquired at various angles are recognized as beef of the same quality, as can be seen in Figure 8. It can therefore be concluded that the acquisition angle does not affect the beef quality identification process, and images can be taken from various angles as long as the smartphone camera is perpendicular to the beef.

4.5. Android Smartphone Implementation

This research uses both hardware and software. The hardware utilized is a tablet with specifications given in Table 6.

Meanwhile, the software used is Android Studio and OpenCV. The process of embedding the system into the Android application is shown in Figure 9.

The marbling score identification developed in this research has been properly embedded into the Android application. We tested the beef quality identification time as shown in Table 7. From the test results, the average identification time was 2.84 s. Based on these results, the system can identify the quality of beef quickly. Therefore, this method can be used for future research, such as beef quality identification based on video processing systems.

5. Conclusion

Results show that the system developed in this research is capable of acquiring and segmenting beef images and identifying the marbling score. The variations involved in the process of image acquisition include camera resolution, distance, and angle. The Otsu thresholding method is able to properly separate the fat and meat regions. Classification was carried out using the decision tree, with resulting accuracies of 90% for the training process and 84% for the testing process. From the test results, the average identification time was 2.84 s. The system was then embedded into an Android application, allowing further research on beef quality identification based on video processing systems.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was funded by a program from the Indonesian Directorate General of Higher Education in 2016.