Abstract

This study focuses on a multicolor space approach that provides a better specification of the color and size of an apple in an image. A real-time machine vision system classifying apples into four categories with respect to color and size was designed. In the analysis, different color spaces were used. As a result, a 97% identification success rate for the red regions of the apple was obtained based on the values of the a* parameter of the CIE L*a*b* color space. Similarly, a 94% identification success rate for the yellow regions was obtained based on the Y parameter of the CIE XYZ color space. With the designed system, three kinds of apples (Golden, Starking, and Jonagold) were classified into four groups with respect to two parameters, color and size. Finally, a 99% success rate was achieved in the analyses conducted on 595 apples.

1. Introduction

Turkey ranks third in the world, after China and the USA, with an apple production of 2,782,370 tons [1]. The apples produced are demanded by customers according to features such as type, color, size, and defects. One of the most important issues in apple marketing, however, is standardization. In the past, color and size classification of apples was carried out by workers simply by looking at them. It is hard to classify apples that have similar colors but belong to different varieties, and, because of differences in the workers' perception, classification mistakes were made. Most of the fruit classification systems built in Turkey classify fruits according to their size and mass. Machines based on fruit mass need a mass measurement system; this type of machine can be used efficiently up to about 2000 kg/h. Circular openings of different sizes or perforated wooden boards are used to separate the fruits according to their size, and with these a person can classify 100–200 fruits per hour. Although machines using centrifugal technology are commonly used in Turkey [2], the leading actors of the sector consider them inadequate for current business conditions.

Recently, with the improvement of image processing technology, machine vision has started to be used in industrial food machines, enabling the classification of different types of fruits in terms of their size, color, and defects with the help of high-definition cameras. Machine vision systems and image processing techniques have been applied by researchers to different types of fruits, such as citrus [3], cherries [4], peaches [5], olives [6, 7], oranges [8], pistachios [9], mushrooms [10], and apples [11–14].

Leemans and Destain obtained RGB images of the Jonagold apple variety via two CCD cameras placed at different angles. Using these images, they classified the apples into two quality groups with a 73% success rate by segmenting the apple surface [11]. Li et al. placed two CCD cameras and two mirrors in such a way that the top and bottom surfaces of Fuji apples could be seen, mounted an 840 nm band-pass filter on the camera lenses, and segmented the apple images with the algorithm they developed. Their results show that the segmentation and classification of one apple take 320 ms [12]. Wen and Tao obtained images of Red Delicious apples using one CCD camera with a 700 nm long-pass filter on the lens and performed fast blob extraction on the images by an adaptive spherical transformation. Their 960 samples consisted of 232 good and 728 defective apples; the identification success rate was 93.7% for the good apples and 95.88% for the defective ones [14]. Kavdir and Guyer classified Golden Delicious apples into three categories using three different methods: nearest neighbor, decision tree classifier, and multilayer perceptron. The highest success rate, 90%, was obtained with the multilayer perceptron [13]. The aforementioned studies focused on defect classification of the apples and on segmentation of the acquired apple images. In this study, on the other hand, three different apple varieties were classified into categories according to their color and size. The RGB images of the apples were tested in different color spaces, and the aim was to obtain the color and size of the apples with the highest possible speed and accuracy. Success in this process depends on the right selection of the color spaces and their transformations.

Color is the main information source for object classification, inspection, and sorting. A color space is a mathematical representation of colors, constructed so that it can represent all colors, and is three dimensional by design. The most frequently used color spaces are RGB (on computer monitors), YIQ, YUV, or YCbCr (in video systems), and CMYK (in color printers). Since none of these color spaces relates directly to the intuitive perception of brightness, saturation, and hue, additional color spaces such as HSV and HSI have been defined [15]. Color spaces can be grouped under two headings. In device-dependent color spaces, the characteristics of the device are particularly important. Device-independent color spaces are those developed by the Commission Internationale de l'Eclairage (CIE); CIE XYZ, CIE L*a*b*, and CIE L*u*v* can be counted among them.

Several benefits of applying a color space transformation have been identified by studies on skin detection. First, a suitable transformation sharpens the codification process and enhances the distinctiveness between skin and nonskin classes. Another benefit is illumination invariance, since varying illumination poses its own challenges for skin detection. Finally, a color space transformation can group shades of different skin tones together [16]. Accordingly, the HSV color space is used for skin color detection by Chaves-González et al. [17]. The CIE L*a*b* system has also been suggested as the best color space for color quantification in foods with curved surfaces [18], and when HSV color space transformations are combined with a Support Vector Machine (SVM) classifier, a 96.7% success rate is obtained in the classification of pizza toppings [19].

There are many studies in the literature on apple classification. In this study, unlike other studies in the literature, a conveyor system was developed to obtain data from different angles of each apple. The apple image was separated from the background by thresholding an HSV color space component. Apple size and color were obtained by thresholding components of a multicolor space (HSV, CIE XYZ, and CIE L*a*b*). Different color space transformations were investigated to find the best color space for the classification of apples.

2. Materials and Methods

2.1. Machine Vision System Hardware

The image acquisition device used in this research is a 1.3-megapixel, 25 fps (frames per second) Complementary Metal Oxide Semiconductor (CMOS) camera with a manually adjustable 6 mm focal length C-mount lens. The system further comprises a lighting system, a 600 mm × 670 mm × 300 mm white-painted, diffusely illuminated tunnel with four fluorescent lamps, a conveyor belt on which the fruits are placed, and an automatic sorting unit. The setup of the system is shown in Figures 1 and 2. The algorithms were developed in Matlab.

Machine vision systems are affected by the level and quality of illumination [20]. Changes in the daylight level, wrong positioning of the lamp, and the wrong choice of lamp can all have negative effects on the software used in the system. Images of an object illuminated with halogen lamps and with fluorescent lamps are shown in Figures 3(a) and 3(b), and the corresponding masking results in Figures 4(a) and 4(b). The transition functions among the color spaces used in the software change depending on the illumination characteristics of the lamp. For this reason, the area where images are collected in the machine vision system is isolated from natural light and illuminated artificially.

In practice, the points to be taken into consideration in the design of the lighting system are a high color-rendering index, a color temperature close to daylight (5000 K–6500 K), a lamp with an appropriate light level, and proper positioning of the lamp so that all surfaces of the object receive equal levels of light. In the lighting system, four Philips TL-D 90 Graphica 18W/950 lamps are used. This lamp has a color temperature of 5300 K (approximately D55) and a color-rendering index (Ra) close to 97%.

2.2. Color Spaces

Generally, color images are taken by a color camera and saved in the three-dimensional RGB (red, green, and blue) color space. Unfortunately, the RGB color space is not perceptually uniform, and the proximity of two colors in it does not indicate their similarity [21]. In addition, different color space transformations can be powerful tools for object recognition.

2.2.1. RGB Color Space

An RGB color image is an array of color pixels, where each pixel is a triplet corresponding to the red, green, and blue components of the image at a specific spatial location [22].

The R, G, and B values of the RGB color space are obtained by integrating the product of the incoming light and the respective sensitivity functions:
\[ R = \int E(\lambda)\,S_R(\lambda)\,d\lambda, \quad G = \int E(\lambda)\,S_G(\lambda)\,d\lambda, \quad B = \int E(\lambda)\,S_B(\lambda)\,d\lambda, \]
where \(E(\lambda)\) is the light spectrum and \(S_R(\lambda)\), \(S_G(\lambda)\), and \(S_B(\lambda)\) are the sensitivity functions of the R, G, and B sensors, respectively [23].

2.2.2. HSV Color Space

The HSV color space is much closer than the RGB system to the way in which humans experience and describe color sensations [22]. HSV separates color into three components: hue, saturation, and value. H (hue) distinguishes among the perceived colors, such as red, yellow, green, and blue; V (value) represents the brightness of a color; and S (saturation) refers to the amount of white light mixed with a hue [21].

The HSV values are calculated from the R, G, and B values [22]. With R, G, and B normalized to [0, 1], \(V = \max(R, G, B)\) and \(\Delta = V - \min(R, G, B)\):
\[ S = \begin{cases} \Delta / V, & V \neq 0, \\ 0, & V = 0, \end{cases} \qquad H = \begin{cases} 60\,(G - B)/\Delta, & V = R, \\ 60\,(B - R)/\Delta + 120, & V = G, \\ 60\,(R - G)/\Delta + 240, & V = B, \end{cases} \]
with H taken modulo 360 degrees.
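As a minimal illustration of this conversion, the Matlab sketch below extracts the three HSV components with the built-in rgb2hsv routine; the file name apple.png is a placeholder and not part of the original study.

```matlab
% Minimal sketch: obtain the H, S, and V components of an RGB image.
% The file name 'apple.png' is a placeholder, not from the original study.
rgb = im2double(imread('apple.png'));   % R, G, B normalized to [0, 1]
hsv = rgb2hsv(rgb);                     % built-in RGB -> HSV conversion

H = hsv(:, :, 1);   % hue, scaled to [0, 1] (multiply by 360 for degrees)
S = hsv(:, :, 2);   % saturation
V = hsv(:, :, 3);   % value (brightness), equal to max(R, G, B)
```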

2.2.3. CIE XYZ Color Space

Since the CIE XYZ color space takes the human eye as its basis, it forms the foundation for all color management systems and includes all perceivable colors. This color space was defined by the CIE in 1931 with respect to standard illuminants (A, B, C, D50, D55, D65, E, and F) and the standard observer (2°). The most significant feature of the CIE XYZ color space is that it is device independent.

The tristimulus values of the CIE XYZ color system [24] are
\[ X = k \sum_{\lambda} S(\lambda)\,\bar{x}(\lambda)\,R(\lambda), \quad Y = k \sum_{\lambda} S(\lambda)\,\bar{y}(\lambda)\,R(\lambda), \quad Z = k \sum_{\lambda} S(\lambda)\,\bar{z}(\lambda)\,R(\lambda), \]
where \(S(\lambda)\) is the relative spectral power of the illuminant, \(\bar{x}(\lambda)\), \(\bar{y}(\lambda)\), and \(\bar{z}(\lambda)\) are the color-matching functions, \(R(\lambda)\) is the spectral reflectance at wavelength \(\lambda\), and \(k\) is a normalizing factor given by \(k = 100 / \sum_{\lambda} S(\lambda)\,\bar{y}(\lambda)\).

2.2.4. CIE L*a*b* Color Space

In 1976, the CIE proposed two color spaces (CIE L*a*b* and CIE L*u*v*) in order to provide perceptually more uniform spaces. There is a strong relation between these color spaces and human visual perception. The components are L* (lightness), a*, which runs from green to red, and b*, which runs from blue to yellow.

The color components of the CIE L*a*b* color space are obtained from the CIE XYZ color space. The standard observer and the standard illuminant used to obtain the X, Y, and Z values affect the L*, a*, and b* values directly.

The transformation from CIE XYZ to CIE L*a*b* can be achieved by the following equations [18]:
\[ L^* = 116\,f\!\left(\frac{Y}{Y_n}\right) - 16, \quad a^* = 500\left[f\!\left(\frac{X}{X_n}\right) - f\!\left(\frac{Y}{Y_n}\right)\right], \quad b^* = 200\left[f\!\left(\frac{Y}{Y_n}\right) - f\!\left(\frac{Z}{Z_n}\right)\right], \]
where \(f(t) = t^{1/3}\) for \(t > (6/29)^3\) and \(f(t) = t/[3(6/29)^2] + 4/29\) otherwise, and \(X_n\), \(Y_n\), and \(Z_n\) are the tristimulus values of the reference illuminant, in this case D55.
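The sketch below mirrors these equations in Matlab; it is an illustrative implementation rather than the authors' code, and the D55 white point values used are the standard tabulated ones for the 2° observer.

```matlab
% Illustrative CIE XYZ -> CIE L*a*b* conversion for the D55 reference white.
% xyz is an M-by-N-by-3 array of tristimulus values scaled so that the Y of
% the reference white equals 1 (the standard D55 white point is used below).
function lab = xyz2lab_d55(xyz)
    white = [0.9568, 1.0000, 0.9215];               % D55 white point, 2 deg observer
    delta = 6/29;
    f = @(t) (t >  delta^3) .* t.^(1/3) + ...       % cube-root branch
             (t <= delta^3) .* (t/(3*delta^2) + 4/29);  % linear branch near zero

    fx = f(xyz(:, :, 1) / white(1));
    fy = f(xyz(:, :, 2) / white(2));
    fz = f(xyz(:, :, 3) / white(3));

    lab = cat(3, 116*fy - 16, ...                   % L*
                 500*(fx - fy), ...                 % a* (green-red axis)
                 200*(fy - fz));                    % b* (blue-yellow axis)
end
```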

2.3. Image Background Removal via Different Color Spaces

The apple area was separated from the background by thresholding. Before thresholding, the RGB image is transformed into HSV with Matlab's rgb2hsv routine (Mathworks 2011); in addition, the RGB image is transformed into the CIE XYZ and CIE L*a*b* color spaces under D55 illumination conditions. The sorting process is carried out by collecting the thresholding results of the selected HSV component, the Y component of the CIE XYZ color space, and the a* component of the CIE L*a*b* color space in one mask:
\[ g(x, y) = \begin{cases} f(x, y), & f(x, y) > T, \\ 0, & \text{otherwise}, \end{cases} \]
where \(g(x, y)\) is the image obtained after the background has been removed, \(f(x, y)\) is the original image, and \(T\) is the threshold. The threshold values depend on the image and are defined by examining the histograms of the color space components; they were fixed and added to the software after testing different apple images.
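A minimal Matlab sketch of this masking step is given below. The use of the saturation channel as the HSV component and its numeric threshold are assumptions (the text does not state them); the Y and a* thresholds follow the values reported in Section 3, and apple.png is a placeholder file name.

```matlab
% Sketch of the background-removal mask built from three color space
% components. The saturation (S) channel and its 0.25 threshold are
% assumptions; the Y and a* thresholds follow Section 3.
rgb = im2double(imread('apple.png'));

hsv = rgb2hsv(rgb);                          % HSV components
xyz = rgb2xyz(rgb, 'WhitePoint', 'd55');     % CIE XYZ under D55 (Image Processing Toolbox)
lab = rgb2lab(rgb, 'WhitePoint', 'd55');     % CIE L*a*b* under D55

S = hsv(:, :, 2);                            % saturation
Y = xyz(:, :, 2);                            % luminance
a = lab(:, :, 2);                            % green-red axis

mask = (S > 0.25) | (Y > 0.5) | (a > 15);    % union of the three thresholding results
mask = bwareaopen(mask, 500);                % remove small spurious regions

g = rgb .* mask;                             % background-removed image g(x, y)
```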

2.4. Identification of Apple Color and Size

The next step after separating the background is the determination of the color and size of the apple. The size of the apple is calculated from the number of pixels it occupies along the x- and y-axes in the masked area.

The color of the apple is determined according to the intensity and distribution of the colors in the specified area. For this purpose, thresholding is applied to the red and yellow regions of the image. Thresholding the Y component of the CIE XYZ color space exposes the yellow regions in the apple image, while thresholding the a* component of the CIE L*a*b* color space exposes the red regions. The color and type of the apple are then deduced from the area that the yellow and red regions occupy on the apple surface. The conversion of an RGB apple image to the CIE XYZ and CIE L*a*b* color spaces can be performed with the built-in functions of Matlab; however, the conversion time of the built-in functions is too long for a real-time system, and only one component of each of these color spaces is actually needed.
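Continuing the sketch from Section 2.3, the red and yellow surface fractions might be computed as follows; the decision cut-offs (1% and 30%) are purely illustrative and are not taken from the study.

```matlab
% Sketch: split the masked apple pixels into red and yellow regions using
% the a* (CIE L*a*b*) and Y (CIE XYZ) thresholds of Section 3, then apply
% an illustrative decision rule (the cut-offs are not from the study).
appleArea    = nnz(mask);                        % number of apple pixels
redRegion    = mask & (a > 15);                  % red pixels (a* threshold)
yellowRegion = mask & (Y > 0.5) & ~redRegion;    % yellow pixels (Y threshold)

redRatio    = nnz(redRegion)    / appleArea;     % share of red surface
yellowRatio = nnz(yellowRegion) / appleArea;     % share of yellow surface

if redRatio < 0.01
    appleType = 'Golden';                        % no red region at all
elseif yellowRatio > 0.30
    appleType = 'Jonagold';                      % red and yellow regions together
else
    appleType = 'Starking';                      % predominantly red surface
end
```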

2.5. RGB to CIE XYZ Conversion

The conversion from the RGB color space to the CIE XYZ and CIE L*a*b* color spaces changes depending on the illuminant characteristics of the lamp. For this reason, a custom function was written for the color temperature of the fluorescent lamps, 5300 K (approximately D55).

First, the nonlinear R′, G′, and B′ values (normalized to [0, 1]) are converted to linear values following IEC 61966-2-1 (1999): for each channel \(C' \in \{R', G', B'\}\),
\[ C = \begin{cases} C'/12.92, & C' \le 0.04045, \\ \left(\dfrac{C' + 0.055}{1.055}\right)^{2.4}, & C' > 0.04045. \end{cases} \]
The conversion from linear RGB to CIE XYZ is then performed with the help of a 3 × 3 matrix. For the sRGB primaries with average daylight (D65) as the reference white, the conversion is [25]
\[ \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}. \]
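A compact Matlab version of this two-step conversion might look as follows; it is a sketch of the standard sRGB relations (inverse gamma plus the D65 matrix above), not the authors' exact function.

```matlab
% Sketch of the two-step conversion: inverse sRGB gamma (IEC 61966-2-1)
% followed by the standard linear-sRGB-to-XYZ matrix (D65 reference white).
function xyz = srgb2xyz_d65(rgb)
    % Inverse gamma: nonlinear sRGB values in [0, 1] -> linear RGB
    lin = zeros(size(rgb));
    small = rgb <= 0.04045;
    lin(small)  = rgb(small) / 12.92;
    lin(~small) = ((rgb(~small) + 0.055) / 1.055) .^ 2.4;

    % Linear sRGB -> CIE XYZ for the D65 reference white
    M = [0.4124  0.3576  0.1805;
         0.2126  0.7152  0.0722;
         0.0193  0.1192  0.9505];

    [h, w, ~] = size(rgb);
    xyz = reshape(reshape(lin, [], 3) * M.', h, w, 3);
end
```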

2.6. Transform from the Source Illuminant to the Destination Illuminant

If the illuminant of the source data differs from the illuminant of the target (destination) RGB space, a chromatic adaptation transform is applied. Humans have a visual ability, called chromatic adaptation, to recognize and discount the color of the illumination and to approximately preserve an object's perceived color; it is most easily understood by viewing a white object under various illuminations. Several chromatic adaptation transforms exist in the literature, most of which are based on the von Kries model [26, 27].

Using the von Kries transform, the XYZ values acquired with respect to the D65 reference white are converted to CIE XYZ values with respect to the D55 reference white used in our lighting system:
\[ \begin{bmatrix} X_D \\ Y_D \\ Z_D \end{bmatrix} = M_A^{-1} \begin{bmatrix} \rho_D/\rho_S & 0 & 0 \\ 0 & \gamma_D/\gamma_S & 0 \\ 0 & 0 & \beta_D/\beta_S \end{bmatrix} M_A \begin{bmatrix} X_S \\ Y_S \\ Z_S \end{bmatrix}, \]
where \((X_S, Y_S, Z_S)\) is a color measured under the source illuminant, \((X_D, Y_D, Z_D)\) is the same color adapted to the destination illuminant, and the cone responses \((\rho_S, \gamma_S, \beta_S)\) and \((\rho_D, \gamma_D, \beta_D)\) are obtained by applying the chromatic adaptation matrix \(M_A\) to the source reference white \((X_{WS}, Y_{WS}, Z_{WS})\) and the destination reference white \((X_{WD}, Y_{WD}, Z_{WD})\), respectively. For the D65 illuminant, the source reference white is (0.9505, 1.0000, 1.0890); for the D55 illuminant, the destination reference white is (0.9568, 1.0000, 0.92149). In the calculations the standard colorimetric observer was assumed. The overall D65-to-D55 conversion matrix is obtained by evaluating this product.
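The adaptation described above might be set up in Matlab as follows. The Bradford cone-response matrix is used here as an assumption, since the text does not state which chromatic adaptation matrix was chosen; the white point values are those quoted above.

```matlab
% Sketch: von Kries chromatic adaptation of XYZ values from D65 to D55.
% The Bradford matrix below is an assumed choice of the adaptation matrix M_A.
whiteD65 = [0.9505; 1.0000; 1.0890];     % source reference white (XYZ, D65)
whiteD55 = [0.9568; 1.0000; 0.9215];     % destination reference white (XYZ, D55)

M_A = [ 0.8951   0.2664  -0.1614;        % Bradford chromatic adaptation matrix
       -0.7502   1.7135   0.0367;
        0.0389  -0.0685   1.0296];

coneS = M_A * whiteD65;                  % source cone responses (rho, gamma, beta)
coneD = M_A * whiteD55;                  % destination cone responses

M = M_A \ (diag(coneD ./ coneS) * M_A);  % complete adaptation: XYZ_D55 = M * XYZ_D65

xyzD65 = [0.36; 0.40; 0.30];             % example tristimulus values under D65
xyzD55 = M * xyzD65;                     % the same color adapted to D55
```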

3. Results and Discussion

Using the designed system, ten images were obtained for test purposes; each image contains one yellow and one red apple. The experimental averages and standard deviations of the parameters of the different color spaces for the color pixels of the red apples, the yellow apples, and the background are given in Table 1. These values were calculated over 5000 pixels for each field.

The apple images were obtained with special fluorescent lamps which have a color temperature of 5300 K (approximately D55) and a color-rendering index close to 97%. These RGB images are converted to the CIE XYZ color space according to the illumination conditions and the standard observer; the D55 illumination conditions of the system are taken into account by means of the von Kries transform matrix. The CIE L*a*b* color space components are then calculated from the CIE XYZ color space using the aforementioned formulas.

After examining the histograms of the pixels along with the color parameter values (Table 1), the a* parameter of the CIE L*a*b* color space was selected for the identification of the red regions, and the Y parameter of the CIE XYZ color space was selected for the identification of the yellow regions. The threshold values of the selected components were specified in accordance with the characteristics of each component. Negative values of the a* component of the CIE L*a*b* color space indicate green while positive values indicate magenta (close to red); therefore, the threshold value was specified as 15. The Y component of CIE XYZ was used to expose the yellow regions by prioritizing the luminance, and the threshold value was specified as 0.5 in accordance with the normalized values. Pixels below the threshold values were deleted. The results show that the identification success rates for the pixels in the red and yellow regions are 97% and 94%, respectively. They also show that the selected HSV component is useful for distinguishing the background and extracting the apple mask.

The size and color information is collected from the images acquired by the machine vision system after the apple region has been masked in the image. The color space components used to achieve this are shown in Figure 5.

These results show that the apple size and color can indeed be extracted. The multicolor space is used to separate the apple from the background and to determine its color and size effectively. The position of the apple in the image is determined by collecting all the thresholding results (Figures 5(a), 5(b), and 5(c)) of the selected HSV component (Figure 6(a)), the Y component of CIE XYZ (Figure 6(b)), and the a* component of CIE L*a*b* (Figure 6(c)) in one mask. The threshold values are obtained from the histograms of these color space components; the histograms of the HSV component, the Y component of the CIE XYZ color space, and the a* component of the CIE L*a*b* color space are presented in Figures 7(a), 7(b), and 7(c), respectively. Using the thresholding results of the Y component (of the CIE XYZ color space) and the a* component (of the CIE L*a*b* color space), the yellow and red regions are determined, and the apple type is obtained from this color information.

There are no red regions on the surface of Golden apples; when there is no red region on the surface of an apple, it is therefore classified as Golden. If the apple surface has only red regions, or red and yellow regions together, the apple is classified as Jonagold or Starking; these types are determined depending on the saturation and extent of the red and yellow regions. In the experimental results, however, only the color (red or yellow) and the size (regular or large) are taken into consideration. Furthermore, the size of the apple is calculated by multiplying the number of pixels the apple covers along the x- and y-axes of the masked area by the pixel length (0.312 mm) (Figure 8).
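As an illustration of this size measurement, the sketch below (reusing the mask from Section 2.3) derives the apple's extent in millimetres from the bounding box of the masked region; the 0.312 mm pixel length is from the text, while the 80 mm boundary between regular and large apples is a hypothetical value.

```matlab
% Sketch: estimate apple size from the mask using the 0.312 mm pixel length.
pixelLength = 0.312;                       % mm per pixel (from the text)

stats = regionprops(mask, 'BoundingBox');  % bounding box of the apple region
bbox  = stats(1).BoundingBox;              % [x, y, width, height] in pixels

widthMM  = bbox(3) * pixelLength;          % extent along the x-axis, in mm
heightMM = bbox(4) * pixelLength;          % extent along the y-axis, in mm

% Hypothetical size rule; the study's actual size boundary is not reproduced here.
if max(widthMM, heightMM) > 80
    sizeClass = 'large';
else
    sizeClass = 'regular';
end
```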

Moreover, for the device-independent color space conversions, a custom D55-compliant function was developed which runs faster than its Matlab built-in counterpart. The RGB to CIE L*a*b* conversion took 34.2 ms on average with Matlab's built-in rgb2lab function, while the same conversion took 13.4 ms on average with the new D55-compliant function. This result is shown in Figure 9.
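For reference, such a timing measurement can be reproduced with Matlab's timeit; the 34.2 ms and 13.4 ms figures above are the study's own measurements, and only the built-in path is shown here (the custom D55-compliant routine would be timed the same way).

```matlab
% Sketch: measure the average RGB -> CIE L*a*b* conversion time of the
% built-in function under D55; 'apple.png' is a placeholder test image.
rgb = im2double(imread('apple.png'));
tBuiltin = timeit(@() rgb2lab(rgb, 'WhitePoint', 'd55'));
fprintf('built-in rgb2lab (D55): %.1f ms\n', 1e3 * tBuiltin);
```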

These two functions (the D55-compliant function and the Matlab built-in one) were applied to the original RGB image (Figure 10(a)) in order to obtain the red-sensitive a* component of the CIE L*a*b* space. Comparing the histograms obtained by the two functions, the pixel aggregation curves are similar despite the different positions of the histograms (Figure 10(b)), so the two color space transformation methods gave similar results (Figures 10(c) and 10(d)).

As a result, 595 apples of three kinds (Starking, Jonagold, and Golden) were examined by the system (Table 2) and classified into four categories according to their color and size. The success rate of the system was determined to be 99%.

Speed is yet another critical aspect to consider. Although the processing time of the developed software for classifying an apple according to the two parameters, size and color, is approximately 260 ms, the system with the custom, fast, D55-compliant program classifies about 95 apples in a minute in practice. The speed of the automatic apple classification system can therefore be further increased by improving the sorting unit.

4. Conclusions

Some important conclusions drawn from this study on the proposed apple classification system are as follows. For an apple classification system, the important questions are (i) which kinds of apples are classified, (ii) which properties are considered for the classification, and (iii) how many apples are classified in a given time. This information indicates how practicable the system is.

Examining the studies existing in the literature, it is seen that classifications were performed mainly with respect to two aspects: the segmentation of the surface images and the surface quality. In this study, the classification of apples was realized practically by considering the parameters color and size. For this, software-based color space analysis, instead of cameras sensitive to different wavelengths and filters, was used to obtain the color and size information of the apples. Different color spaces were compared and the most suitable ones were selected; in this selection, the camera and lighting equipment used in the system are of great importance.

In this study, in line with the studies in the literature, a system that places the apple under the camera and allows rotary motion was proposed. With such a system, multiple photographs can be taken and most of the surface of the apple can be viewed; however, the processing time naturally increases as more photographs are taken. To overcome this problem, only one photograph was used to obtain the color and size information of each apple.

In this experimental study, the system works on only one line; if the number of lines is increased, more apples can be classified. Since the separator of the system works over a short distance, it cannot keep up with the processing rate of the software. Although the classification time of the software for an apple is 260 ms, which corresponds to about 240 apples per minute, the actual throughput remains at 96 apples per minute. This problem can be solved easily by using a more suitable belt conveyor.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported in the framework of Trakya University Scientific Research Projects under Project no. 2010/140.