Journal of Sensors
Volume 2017 (2017), Article ID 7321950, 12 pages
Research Article

Comparative Analysis between LDR and HDR Images for Automatic Fruit Recognition and Counting

1School of Sciences and Technology, University of Trás-os-Montes and Alto Douro (UTAD), Quinta de Prados, 5000-801 Vila Real, Portugal
2INESC TEC Technology and Science, Campus da FEUP, 4200-465 Porto, Portugal
3Polytechnic Institute of Bragança, School of Technology and Management, Campus de Sta. Apolónia, 5300-253 Bragança, Portugal
4Agricultural School of Jundiaí, Federal University of Rio Grande do Norte (UFRN), Macaíba, RN, Brazil

Correspondence should be addressed to Tatiana M. Pinho

Received 28 April 2017; Revised 20 June 2017; Accepted 3 July 2017; Published 3 August 2017

Academic Editor: Domenico Caputo

Copyright © 2017 Tatiana M. Pinho et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Precision agriculture is gaining increasing interest in the current farming paradigm. This new production concept relies on the use of information technology (IT) to provide a control and supervision structure that can lead to better management policies. In this framework, imaging techniques that provide visual information over the farming area play an important role in production status monitoring. As such, accurate representation of the gathered production images is a major concern, especially if those images are used in detection and classification tasks. Real scenes, observed in natural environments, present high dynamic ranges that cannot be represented by common LDR (Low Dynamic Range) devices. This issue can, however, be handled by High Dynamic Range (HDR) images, since they can store luminance information similarly to the human visual system. In order to prove their advantage in image processing, a comparative analysis between LDR and HDR images, for fruit detection and counting, was carried out. The obtained results show that the use of HDR images improves the detection performance by more than 30% when compared to LDR.

1. Introduction

In agriculture, most of the performed tasks are systematic and repetitive. Many of those activities can be executed more efficiently by introducing machines into the production cycle. It is well known that the introduction of engine-driven machines in agriculture is an old subject that can be traced back to the dawn of the industrial revolution. Nowadays, the influence of information and data processing is significant and, once again, a new industrial revolution is arising, coined "Industry 4.0." However, unlike the previous industrial revolutions, the trend is not to add something physical, such as engines and machines, but to devise ways to use the enormous amount of information collected during the manufacturing process in order to make production processes more efficient. Information is gathered by arrays of sensors scattered along the production lines and sent to centralized or decentralized data processing elements. In those computational elements, Big Data and machine learning algorithms process the collected data and are responsible for biasing the production process towards a more desirable behaviour, for example, improving throughput or seeking zero-defects manufacturing.

Since agriculture faces the same challenges as any industry, if not more, it makes sense to talk about Agriculture 4.0. Indeed, unlike most industries, agriculture takes place over a much larger production area and the production process occurs in a more uncontrolled and harsh environment. In any new agricultural production paradigm, the addition of automation systems is crucial to boost efficiency but, even more important, is the inclusion of systems that can increase the quantity and quality of available information. The addition of information and data processing tools to the agricultural process leads to the current concept of precision farming.

Precision farming, or precision agriculture, can be defined as a management strategy that uses information technology to provide data from a set of monitoring points across the process, aiming at the accurate application of measures that will hopefully increase the quality and quantity of the crop. These measures can span from the application of fertilizers and herbicides to fuel management, among others [1, 2].

Frequently, precision agriculture makes use of visual information gathered from the field [3]. This information can be obtained by different means, ranging from human visual inspection to satellite images and locally taken pictures, for example, images taken of the crop and then used for plantation production forecasting. Indeed, several techniques, such as pictures acquired by digital cameras and multispectral images, have already been used for this purpose [4–10]. For instance, [11] used red/green/blue (RGB) digital cameras and machine learning techniques to identify tomato fruits at their several development stages. In [12], in the context of detecting decay in citrus fruits, a hyperspectral system based on two liquid crystal tunable filters was proposed. Making use of different sensors, [13] presents an approach to localize and map the fruits available in a mango plantation. A literature review on the use of machine vision systems for fruit detection and localization is provided in [14].

Imaging techniques can be used to forecast a given crop's production. For example, the grower is able to estimate the fruit volume flow and thus plan the storage space beforehand and anticipate the amount of labor to hire and the transport equipment needed, among many other things. Besides, local management, a characteristic intrinsic to precision farming, would be easier to implement [15].

Besides the ones enumerated above, there are many other tasks in agriculture that rely on visual observations, for example, disease detection. Hence, in the context of precision agriculture, it is of great interest to have computer vision systems, scattered along the production area, that can be used to obtain information on the current production status. Computer vision aims to emulate tasks that the human visual system can easily perform, for example, object recognition and tracking. In short, science is trying to design artificial vision systems that are able to compete with the human eye: a mechanism perfected over thousands of years of natural evolution. Indeed, the human visual system is able to see a dynamic range several orders of magnitude larger than that of currently existing electronic devices [16]. This poses some problems: for example, using a typical digital image acquisition device placed in a regular room, longer exposure times capture better detail in dark areas, but light areas become saturated. On the other hand, with shorter exposure times, more detail is obtained in the whites but the darker areas lose quality. One way to bypass this problem is through HDR (High Dynamic Range) images [17]. The appearance of alternative techniques, like HDR, was motivated by the attempt to mimic the human visual system, which has the ability to globally adapt to luminance over 12 orders of magnitude (locally, to about 4-5 orders of magnitude) [18]. Currently existing imaging devices have a limited dynamic range, typically 2 or 3 orders of magnitude, thus being Low Dynamic Range (LDR) devices [17, 19].
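The exposure tradeoff just described can be illustrated with a toy simulation; the scene radiances, exposure times, and simple 8-bit sensor model below are invented for illustration, not measurements from this work:

```python
# A toy simulation of the exposure tradeoff: a single 8-bit capture clips
# either the shadows or the highlights of a wide-range scene.
# All radiance values and exposure times here are made up.

def capture_8bit(radiance, exposure_time):
    """Simulate an LDR sensor: integrate light, clip and quantize to 0..255."""
    value = radiance * exposure_time
    return min(255, max(0, round(value)))

scene = {"deep_shadow": 0.6, "foliage": 40.0, "fruit": 300.0, "sky": 20000.0}

for t in (1.0, 0.01):  # long versus short exposure (arbitrary units)
    print(t, {name: capture_8bit(rad, t) for name, rad in scene.items()})
# Long exposure keeps shadow and foliage detail but saturates fruit and sky
# at 255; short exposure fits the sky in range but collapses the dark areas.
```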

Recently, HDR research has been oriented towards HDR image sensors, HDR imaging techniques, encoding methods for more efficient transmission, and algorithms that allow the visualization of HDR images on LDR equipment [20].

An HDR image is usually obtained by merging images with different exposures [21]. In this sense, HDR images are capable of capturing the luminance of real scenes, from extremely dark areas to very bright areas [22]. This is, nevertheless, a relatively recent area, with around a decade of existence [18, 23].

This work aims to compare the use of LDR and HDR images (with their respective tone mapping techniques) for the detection and counting of tree fruits in order to predict their productivity. In addition to this introductory section, Section 2 gives a general description of the HDR imaging technique, and Section 3 presents some of the tone mapping methods existing in the literature. In Section 4, the problem statement and the methodology applied in the experimental part of this work are described, and Section 5 presents the main results. Finally, Section 6 summarizes the main conclusions and gives insights into future work.

2. HDR Imaging Technique

Natural light can have intensity intervals greater than 10 orders of magnitude and, in a single scene, the contrast dynamic range can reach several orders of magnitude, or even more in certain situations. However, most devices can only reproduce, at most, 2-3 orders of magnitude [24]. In a regular camera, there are only 256 brightness levels (8 bits), unsuitable for actual scenes, resulting in images with very dark or very bright areas [23, 25]. It should be noted that the dynamic range of an image, scene, or imaging device corresponds to the ratio between the highest and the lowest luminance level of a signal [17].
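The ratio definition above can be made concrete with a short sketch; the helper name is illustrative:

```python
import math

# Dynamic range as defined above: the ratio between the highest and lowest
# luminance levels of a signal, here expressed in orders of magnitude.

def dynamic_range_orders(l_max, l_min):
    return math.log10(l_max / l_min)

# An 8-bit device distinguishes levels 1..255, i.e. roughly 2.4 orders of
# magnitude, consistent with the 2-3 orders quoted for typical LDR devices.
print(round(dynamic_range_orders(255, 1), 2))
```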

An HDR image stores real-scene luminance information corresponding to what the human visual system is able to see simultaneously in a scene [26]. In this sense, since HDR images hold the true color and dynamic range information of the original scene, their processing, manipulation, representation, and other operations are no longer limited by the number of bits used to describe each pixel [23]. Larger dynamic ranges allow for greater detail in brighter and darker areas simultaneously [27]. Based on the image acquisition rate, an HDR camera can be classified into two types: one with no video acquisition capability, suitable only for still images, and a second type with the possibility of video recording [25].

HDR images can be obtained by hardware, using devices with multiple exposures or equipped with special sensors. Alternatively, they can be generated using computer software [28, 29]. In a common digital camera, the images are obtained by exposing the camera sensor to the incident light, that is, to the radiance, for a certain period of time (the exposure time). The electric charge of the sensor increases during exposure and is subsequently converted into a digital number by an analog-to-digital converter [30]. HDR-based cameras, on the other hand, operate by separating the light refracted by the lenses into multiple beams, each of which is then converged onto a sensor placed in its path. Some of the elements that allow this beam separation are semitransparent mirrors, polka-dot beam splitters, dichroic cubes, pellicle beam splitters, and special prisms [25].

Up to now, no commercially available standard camera can extract the details of high-contrast scenes in a single exposure. Thus, although the cameras available in the market have sufficient spatial resolution, commonly expressed in number of pixels, they lack dynamic range, or bit-depth, and therefore have a hard time representing the luminosity dynamic range present in real scenes [31]. An alternative is, as previously mentioned, to capture several images of the same scene with different exposure times. This is done in such a way that some images are responsible for capturing the details of the darker zones, others the brighter zones, and the rest for acquiring images with intermediate luminosity [27]. Thus, each image of the sequence will have pixels with different properties, some belonging to correctly exposed frames and others to under- or overexposed frames. It is possible, by later computational processing, to eliminate the pixels that are too light or too dark [32]. In this sense, any camera that allows capturing with distinct exposure times can be used to produce HDR-quality images [33]. This multi-exposure HDR capture technique, using conventional LDR cameras, appears to be a good alternative to HDR cameras [34]. However, two main challenges must be tackled during the image fusion process: misalignment and ghosting. Misalignment results from the overall movement of the camera, giving the final image a blurred appearance. This can be solved by placing the camera on a tripod or by using an image registration method [29]. Ghosting, on the other hand, is associated with the movement of objects in the scene, which appear at different locations in the various frames, causing artifacts in the combined HDR image [28, 29, 35]. The solution to this problem is more complex than the previous one, since it involves the movement of external elements [29].
In addition, the detection and correction of these artifacts can be hampered by the misalignment between different exposures and by the presence of noise in the data and in the estimated camera response function [35]. In this sense, extensive research has been conducted on this topic, and there are currently several methods for detecting and correcting ghosting. Some of the detection methods are based on variance, entropy, prediction, and motion compensation, among many others. Regarding ghost removal techniques, some methods keep a single occurrence of the object in motion while others seek to remove all moving objects [29].
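The multi-exposure fusion idea can be sketched as follows, assuming an already-linearized camera response: each pixel's radiance is estimated as a weighted average of value/exposure-time across exposures, with a hat-shaped weight discounting under- and overexposed samples. Real pipelines also recover the camera response curve and handle the alignment and ghosting problems just described; the function names and thresholds here are invented:

```python
# Hedged sketch of multi-exposure HDR fusion under a linear-response
# assumption: mid-range pixel values are trusted, near-black and
# near-saturated ones are discounted.

def hat_weight(z, z_min=5, z_max=250):
    """Weight for an 8-bit sample: 0 near the extremes, 1 at mid-range."""
    if z <= z_min or z >= z_max:
        return 0.0
    mid = (z_min + z_max) / 2
    return 1.0 - abs(z - mid) / (mid - z_min)

def fuse_pixel(samples):
    """samples: list of (pixel_value_0_255, exposure_time) for one pixel."""
    num = den = 0.0
    for z, t in samples:
        w = hat_weight(z)
        num += w * (z / t)   # radiance estimate from this exposure
        den += w
    return num / den if den else 0.0

# A dark-area pixel: reliably measured only in the longer exposures.
print(fuse_pixel([(2, 0.01), (12, 0.1), (120, 1.0)]))
```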

Taking into account that common image representation devices, such as Liquid Crystal Displays (LCD), have a limited dynamic range, displaying HDR images directly on these conventional screens requires applying luminance compression methods such as tone or gamut mapping [17, 36–38]. With tone mapping, the dynamic range of the radiance maps is compressed to match the dynamic range of the display device [23]. Several tone mapping operators have been studied in recent years [26]; Section 3 will be devoted to further exploring this subject. Still, there are also HDR screens that use, as backlight, an active matrix of ultrahigh-white LEDs [39].

In conclusion, the generation and visualization of HDR images involve several steps: image capture, storage, possible image processing, and visualization. Regarding this last step, an HDR image can be made visible directly on HDR screens or on conventional LDR screens after tone mapping processing. It is possible to anticipate that, in the near future, the industrial trend related to image acquisition will be oriented towards HDR. This will inevitably influence the commercialization of capture devices, such as cameras and sensors; storage methods, such as compression and coding; and reproduction media, such as rendering, tone mapping, printing, and visualization [23].

3. Tone Mapping

In order to correctly reproduce an image, it is necessary to consider the human visual system and, in particular, the way it processes light information. Biologically speaking, the radiance is captured by rods and cones in the retina and then passed into the visual system. Subsequently, those signals are nonlinearly processed by several layers of neurons, which form an image called a percept that, in fact, does not correspond to the physical radiance of the scene [38].

Due to the wide range of lighting conditions, the visual system needs to adapt to environmental situations. Adaptation to the overall illumination occurs in the pupil through changes in its diameter. The cones (photoreceptors), in turn, adapt their sensitivity to the average luminance of the scene. Finally, local adaptation modulates contrasts [38].

One of the functions of tone mapping is to simulate the processing of the human visual system in order to make images perceptually meaningful [38]. The task underlying tone mapping is therefore to strike a balance between emphasizing all features of an image and presenting good contrast in the produced LDR image [20].

Tone mapping operators can be classified as global or local. Global techniques use a single, highly nonlinear, spatially invariant mapping function, while local methods use the local neighborhood around each pixel to perform the mapping [20]. That is, in global operators, the same transformation is applied to all pixels of the image, while in local operators the transformation is adapted to the different zones of the image [40]. While global methods are easier to implement, they tend to lose detail. Local methods, on the other hand, are computationally heavier and more complex, because the algorithms sometimes have parameters that need to be empirically defined [23].
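A minimal global operator can be sketched as follows, in the spirit of the Reinhard-style curve Ld = L/(1 + L) after scaling by the scene's log-average luminance; this is an illustrative Python analogue (the paper's operators run in MATLAB), with invented pixel values:

```python
import math

# Global tone-mapping sketch: every pixel goes through the same curve
# Ld = L / (1 + L) after scaling by the scene's log-average luminance
# ("key"). Being spatially invariant, it is simple but can wash out
# local detail, which is exactly the tradeoff described above.

def reinhard_global(luminances, key=0.18):
    # Log-average luminance of the scene (a small epsilon avoids log(0)).
    log_avg = math.exp(sum(math.log(1e-6 + l) for l in luminances)
                       / len(luminances))
    scaled = [key * l / log_avg for l in luminances]
    return [l / (1.0 + l) for l in scaled]  # compress into [0, 1)

hdr_pixels = [0.01, 0.5, 2.0, 150.0, 9000.0]  # luminances over ~6 orders
print([round(v, 3) for v in reinhard_global(hdr_pixels)])
```

Note how the whole range is mapped monotonically into [0, 1): nothing clips, but extreme values are heavily compressed.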

In addition, tone mapping operators can also be viewed as segmentation operators, where the image is segmented into larger regions and a different tone mapping is applied to each region. Alternatively, they can be understood as frequency/gradient operators, in which high and low frequencies are separated: the low frequencies are processed while the high frequencies are kept unchanged to preserve detail [16].

One limitation of tone mapping algorithms is the need to choose which portion of the luminance range will be represented with more realism. This choice is usually made based on the signal concentration, which may lead to saturation of the luminance values outside this range [41].

It should be noted that there is no ideal tone mapping that fits all situations. For each particular case, given the unique characteristics, different methods will be preferred, and sometimes even a combination of several methods is required [23].

The concept of tone reproduction was introduced in 1993 by Tumblin and Rushmeier [40, 42]. Since then, several tone mapping operators have been proposed [43]. For example, [44] presented a tone mapping method that incorporates models of human sensitivity to contrast, brightness, spatial acuity, and color; the concept underlying their methodology is a new histogram adjustment technique. In [45], an alternative tone mapping method was proposed, aiming to preserve image detail while decreasing contrast. Their technique is based on the decomposition of the image into a base layer, obtained by a nonlinear filter called the bilateral filter, and a detail layer. To achieve the initial goal, only the base layer is subjected to contrast reduction; subsequently, the processed layers are recombined to generate the mapped HDR image. On the other hand, [46] presented a tone mapping method that consists of solving a Poisson equation on a modified gradient field, which in turn is obtained by attenuating the magnitude of the larger gradients. This method is conceptually simple, easy to implement, and able to yield efficient and robust results. An alternative tone mapping algorithm, based on the techniques previously developed by Ansel Adams [47], was presented in [48]. The proposed algorithm simulates the dodging-and-burning technique used in traditional photography, using it to reproduce the local contrast of high-contrast images. Adaptive logarithmic mapping was proposed by [49] and is based on logarithmic compression of the luminance values; the goal was to develop a high-quality, fast tone mapping technique able to mimic the human response to light.
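The adaptive logarithmic mapping of [49] can be sketched as follows; this is a simplified Python rendering of the commonly quoted form of the operator, with the bias parameter and maximum scene luminance chosen for illustration:

```python
import math

# Sketch of adaptive logarithmic mapping in the style of [49]: display
# luminance follows log(1 + Lw), with the logarithm base interpolated
# between 2 (for dark pixels) and 10 (for bright pixels) through a
# "bias" power function. Parameter values are commonly quoted defaults.

def drago_tonemap(lw, lw_max, bias=0.85, ld_max=100.0):
    bias_power = math.log(bias) / math.log(0.5)
    base = 2.0 + 8.0 * (lw / lw_max) ** bias_power  # adaptive log base
    numerator = ld_max * 0.01 * math.log(1.0 + lw)
    denominator = math.log10(1.0 + lw_max) * math.log(base)
    return numerator / denominator

lw_max = 1000.0  # illustrative maximum scene luminance
print([round(drago_tonemap(l, lw_max), 3) for l in (0.1, 1.0, 100.0, 1000.0)])
```

The mapping is monotone and reaches 1.0 at the maximum scene luminance, so the brightest pixel lands exactly at the display's maximum.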

Before ending this section, it is worth noting that the intense research activity found in the domain of HDR image tone mapping is not matched when dealing with HDR video [50]. Nevertheless, some methods have been proposed and can be found in the literature [41, 51].

4. Problem Statement

Having a good reference for the expected production volume of a given crop is fundamental to any farmer. Being able to closely predict the outcome of a production season is important, since it can lead to better resource allocation, such as man-hours, storage, and transportation. This statement applies to any type of agricultural production. Among them, this work concerns the production of fruits, particularly oranges and lemons. In the precision agriculture frame of reference, it is fundamental to have information on the current quantity of fruits standing on the trees. Having a human operator deal with this issue is impracticable: first, humans are not very good at dealing with repetitive tasks; second, the time and money involved in keeping track of this information are significant. Hence, the development of materials and methods to automate these tasks is of paramount importance. There are several technological problems in automating fruit counting. First, if a human-like approach is used, it is necessary to have artificial vision systems that are able to detect and recognize the fruits on a tree. Second, the detection of the visible fruits hanging on a tree is not enough; it must be complemented with mathematical modelling to estimate the total amount of fruits from observing only the outermost specimens. Besides that, electromechanical devices must be devised to fully automate the process, that is, mechatronic systems that carry the computer vision system and are able to perform the necessary computations and upload the data to a remote database.

This work deals with the first of the problems enumerated above, that is, being able to detect and count the number of fruits by means of a computer vision process, in particular, using HDR images to perform the fruit detection. Fruit detection by means of image processing techniques is not a new subject, and several application examples can be found in the literature [4–6, 9, 14]. However, none of those approaches resort to HDR images. This work will show that the use of HDR images outperforms, in terms of detection capability, the use of LDR images. This statement is validated by a set of comparative tests described along the present section.

4.1. Materials and Methods

In the current setup, the acquisition of various image exposures was carried out using a Canon EOS 5D Mark III digital camera mounted on a tripod. The Corel PaintShop Pro X6 software was used to combine the set of exposures into an HDR image. Moreover, the numerical computation software MATLAB® was used to perform digital image processing operations.

In this context, and considering Figure 1, three exposures of the same scene were obtained, namely, one with normal exposure (equivalent to the LDR image of a traditional camera), one overexposed, and another underexposed. The different exposures were then combined to produce an HDR image, to which three tone mapping algorithms were applied: Reinhard et al. [48], Drago et al. [49], and the one embedded in the camera itself. The tone mapping operators were applied using the HDR toolbox [16] for MATLAB. The resulting four images (the LDR together with the three tone-mapped HDR versions) are sent to the fruit detection and counting algorithm. The detection performance is analyzed in terms of the yield, η, computed as η = (C − FP)/T × 100%, where C represents the number of correctly identified fruits, FP the false positives, and T the total number of fruits.

Figure 1: Analysis methodology implemented.
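Reading the yield measure as described in the text (correct detections minus false positives, over the true total), it can be sketched in Python; the helper name and example numbers are illustrative, not taken from the paper's tables:

```python
# Sketch of the performance measure: yield as the fraction of fruits
# correctly found, penalized by false positives, over the true total.

def detection_yield(correct, false_positives, total):
    """Yield in percent: (C - FP) / T * 100."""
    return (correct - false_positives) / total * 100.0

# Illustrative numbers: in a tree with 18 fruits, detecting two additional
# fruits shifts the yield by 2/18, i.e. about 11.1 percentage points.
print(round(detection_yield(16, 0, 18) - detection_yield(14, 0, 18), 1))
```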

The algorithm for fruit detection and counting was developed as an adaptation of the SimpleColorDetectionByHue() function (–/content/SimpleColorDetectionByHue.m): the original image is converted to the HSV color space, in which a threshold is applied limiting the hue (H) component. After this separation, the regions with an area below a preset value are eliminated. To the segmented regions resulting from the previous steps, a MATLAB function (imfindcircles()) is applied, which detects circular forms whose radii lie between two established values. Therefore, in order to apply the detection algorithm, it is necessary to define six values: the minimum and maximum hue component limits, the area below which regions are eliminated, the maximum and minimum radii of the circles to be detected, and the sensitivity of the circle detection function. Pseudocode for the complete fruit detection and counting algorithm is presented in Algorithm 1.

Algorithm 1: Fruits detection and counting algorithm pseudocode.
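The thresholding and filtering steps of this pipeline can be sketched in Python as a simplified stand-in for the MATLAB implementation: a hue mask followed by small-region removal. The circle fitting performed by imfindcircles() is replaced here by plain region counting, and the toy "hue channel" and thresholds are invented for illustration:

```python
# Simplified Python analogue of the detection pipeline described above:
# hue thresholding followed by small-region removal. The MATLAB version
# additionally fits circles with imfindcircles(); here candidate regions
# are simply counted.

def hue_mask(hue_img, h_min, h_max):
    """Binary mask of pixels whose hue lies inside [h_min, h_max]."""
    return [[h_min <= h <= h_max for h in row] for row in hue_img]

def regions(mask):
    """4-connected components of a boolean mask (iterative flood fill)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def count_fruit_candidates(hue_img, h_min, h_max, min_area):
    mask = hue_mask(hue_img, h_min, h_max)
    return sum(1 for comp in regions(mask) if len(comp) >= min_area)

# Toy 5x6 hue channel: two orange-hued blobs (~30) plus one-pixel noise.
img = [[120, 120, 30, 30, 120, 120],
       [120, 120, 30, 30, 120, 120],
       [120, 120, 120, 120, 120, 30],
       [30, 30, 120, 120, 120, 120],
       [30, 30, 120, 120, 120, 120]]
print(count_fruit_candidates(img, 20, 40, min_area=2))  # noise spot rejected
```

The area threshold plays the same role as in the paper's pipeline: isolated noise pixels that pass the hue test are discarded before counting.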

Although images of several situations were registered, three case studies, located in the district of Aveiro, Portugal, were selected and described in detail to demonstrate the performance of the developed algorithm. The first case study concerns an orange tree, in which the isolated tree can be observed (Figure 2(a)); the second also concerns an orange tree, but with several orange trees together (Figure 2(b)); and the third concerns a lemon tree (Figure 2(c)). All the images were acquired with the same resolution. Tables 1–3 present the settings of the camera and lens used to acquire the images. Within the same case study, the parameters to be defined were kept constant in order to guarantee the reliability of the results when comparing the various input situations for the algorithm. Besides these three case studies, general results are presented concerning additional situations in order to demonstrate the increased performance of HDR images compared to LDR.

Table 1: Settings of the camera and lens used in the first case study.
Table 2: Settings of the camera and lens used in the second case study.
Table 3: Settings of the camera and lens used in the third case study.
Figure 2: Case studies: (a) first, isolated orange tree; (b) second, orange trees together with other trees; (c) third, lemon tree.

5. Results and Discussion

In this section, the results concerning the detection and counting performance will be presented. Those results were obtained by means of a detection and counting algorithm used with two different types of images: a conventional LDR photo and the computed HDR version.

5.1. First Case

In Figure 3, it is possible to observe the different exposures for the situation presented in the first case. Figures 4(a)–4(d) show the images used for the detection and counting of fruits, namely, the LDR image and the HDR images with tone mapping applied directly by the camera, by Reinhard, and by Drago, respectively. A total of 18 fruits are present in the tree.

Figure 3: Multiple exposures ((a): normal, (b): underexposed, and (c): overexposed) obtained for the first case.
Figure 4: Images obtained for the first case: (a) LDR; (b) HDR with tone mapping from the camera itself; (c) HDR with Reinhard tone mapping; (d) HDR with Drago tone mapping.

After processing by the fruit detection and counting algorithm, the results can be observed in Figures 5 and 6, namely, the segmented regions and the detected fruits. Table 4 shows, in numerical terms, the total number of fruits counted as well as the performance of the algorithm for the different input images.

Table 4: Results of the methodology applied for the first case in terms of fruits counted, false positives, and overall yield.
Figure 5: Regions segmented by the algorithm for the different input images of the first case: (a) LDR; (b) HDR with tone mapping from the camera itself; (c) HDR with Reinhard tone mapping; (d) HDR with Drago tone mapping.
Figure 6: Fruits detected by the algorithm for the different input images of the first case: (a) LDR; (b) HDR with tone mapping from the camera itself; (c) HDR with Reinhard tone mapping; (d) HDR with Drago tone mapping.

From the results expressed in Table 4, it is possible to conclude that all of the HDR images perform better than the LDR image, regardless of the tone mapping applied. However, it is the tone mapping applied by the camera itself that presents the best performance, with an improvement of 11.1% compared to the LDR image. This result is due to the fact that this tone mapping is perfectly calibrated for the characteristics of the camera and therefore produces an image with better quality. There were no differences in yield between the other two operators.

5.2. Second Case

Regarding the second case, the multiple exposures, from which the HDR image was generated, are presented in Figure 7.

Figure 7: Multiple exposures ((a): normal, (b): underexposed, and (c): overexposed) obtained for the second case.

Figure 8 shows the LDR and HDR images with the different tone mapping operators. The segmented regions, obtained after the detection and counting algorithm, are represented in Figure 9. The fruits detected by the method are shown in Figure 10. Table 5 shows the numerical results in terms of number of detected fruits, false positives, and yield. It should be noted that this tree has a total of fruits.

Table 5: Results of the methodology applied for the second case in terms of fruits counted, false positives, and overall yield.
Figure 8: Images obtained for the second case: (a) LDR; (b) HDR with tone mapping from the camera itself; (c) HDR with Reinhard tone mapping; (d) HDR with Drago tone mapping.
Figure 9: Regions segmented by the algorithm for the different input images of the second case: (a) LDR; (b) HDR with tone mapping from the camera itself; (c) HDR with Reinhard tone mapping; (d) HDR with Drago tone mapping.
Figure 10: Fruits detected by the algorithm for the different input images of the second case: (a) LDR; (b) HDR with tone mapping from the camera itself; (c) HDR with Reinhard tone mapping; (d) HDR with Drago tone mapping.

According to Table 5 and Figures 8–10, it is possible to observe that, in this case, there was a significant improvement from the application of HDR images to fruit detection. In particular, with respect to the LDR image, and independently of the tone mapping operator, there was an increase in yield of approximately 33.4%. This may be due to the homogeneity of the image when compared to the first case where, besides the trees, the sky was also present. Also, it must be noted that the fruits scattered along the tree have different illumination levels: some of them are in shaded areas, which makes them hard to detect in the LDR image, while HDR is able to detect them. It could also be observed that, unlike in the first case, the three tone mappings considered provided similar results, leading to the same number of detected fruits.

5.3. Third Case

This third case deals with a lemon tree. The three images obtained with the camera are presented in Figure 11. Once the HDR image was generated, the conversion to LDR images was performed with the different tone mapping operators. Figure 12 presents the LDR image obtained from the normal exposure, the image with the tone mapping applied by the camera itself, the image obtained with the Reinhard tone mapping, and finally the image with the Drago tone mapping.

Figure 11: Multiple exposures ((a): normal, (b): underexposed, and (c): overexposed) obtained for the third case.
Figure 12: Images obtained for the third case: (a) LDR; (b) HDR with tone mapping from the camera itself; (c) HDR with Reinhard tone mapping; (d) HDR with Drago tone mapping.

The segmentation regions provided by the algorithm are presented in Figure 13, and the detected fruits are depicted in Figure 14. Notice that a total of fruits can be observed.

Figure 13: Regions segmented by the algorithm for the different input images of the third case: (a) LDR; (b) HDR with tone mapping from the camera itself; (c) HDR with Reinhard tone mapping; (d) HDR with Drago tone mapping.
Figure 14: Fruits detected by the algorithm for the different input images of the third case: (a) LDR; (b) HDR with tone mapping from the camera itself; (c) HDR with Reinhard tone mapping; (d) HDR with Drago tone mapping.

Table 6 shows the numerical results obtained for this third case study: as in the previous cases, the number of counted fruits, the false positives, and the overall yield of the algorithm.

Table 6: Results of the methodology applied for the third case in terms of fruits counted, false positives, and overall yield.

Contrary to the previous cases, the HDR image yield here was equal to or worse than that of the LDR image. Nevertheless, as can be seen in Figure 14, the HDR image allowed the detection of fruits that were more concealed or located in darker areas. It is also worth noting that, during HDR generation, the algorithm applied by the camera crops the edges of the image, so the fruit located in the lower left corner was not visible and therefore could not be detected. Had it been visible, the algorithm would be expected to count it, as happened with the Reinhard and Drago tone mappings, thus leading to a higher yield. For the remaining tone mappings, the yield was about 5.6% lower than that of the LDR image. One possible explanation for this phenomenon is the ghosting effect: the objects in the exposures are not perfectly aligned, blurring the resulting image and thus reducing the performance of the detection algorithm. This happens because the environment is volatile, with wind-induced leaf movement introducing a significant disturbance. In addition, in this case, the color of the fruits is more similar to the color of the leaves, making them more difficult to detect in the tree.
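
Ghost artifacts of this kind arise when objects move between the bracketed shots. A common family of deghosting methods flags pixels whose radiance estimates disagree across exposures; the sketch below illustrates that idea only, with hypothetical names and a simple relative-deviation threshold, and is not the deghosting approach of any particular camera.

```python
import numpy as np

def ghost_mask(exposures, times, tol=0.25):
    """Flag likely ghost pixels: after dividing each (linear) exposure by its
    exposure time, a static pixel should give consistent radiance estimates;
    large relative disagreement across shots suggests motion (ghosting)."""
    radiance = np.stack([img / t for img, t in zip(exposures, times)])
    mean = radiance.mean(axis=0)
    spread = radiance.std(axis=0)
    return spread > tol * (mean + 1e-6)   # relative-deviation threshold

# Toy 1x3 scene: the middle pixel "moves" between the two shots.
short = np.array([[0.2, 0.5, 0.8]])
long_ = np.array([[0.4, 0.2, 1.6]])       # middle pixel is inconsistent
print(ghost_mask([short, long_], times=[1.0, 2.0]))
```

Pixels flagged in this way can be excluded from the HDR merge or filled from a single reference exposure, which is the usual remedy for the blur observed here.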

5.4. Additional Case Studies

As observed in the previous case studies, the tone mapping with the best performance relative to the LDR image was the one applied by the camera itself. This tone mapping is calibrated to the camera's own characteristics, leading to better quality images. In order to further assess the performance of HDR images relative to LDR, additional case studies were run using the camera's own tone mapping on different fruit trees, namely, orange, lemon, and plum trees. The results concerning the orange trees are presented in Table 7.

Table 7: Results of the methodology applied to additional case studies comparing HDR with camera tone mapping and LDR image in orange trees.

From Table 7, it can be observed that, on average, the improvement of HDR over LDR images in orange fruit detection is 9.1%. A factor with significant impact on this improvement is illumination. When the fruits within the same image have uniformly distributed illumination levels, the LDR and HDR images perform similarly. However, when illumination varies, creating shaded areas, HDR performs better than LDR by taking advantage of the different exposures, in particular the overexposed one. In some cases, the LDR image presents better results than the HDR, owing to false positives in the HDR image that reduce its performance.
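
The improvement figures quoted throughout (33.4%, 9.1%, and so on) follow from the per-image yields. The helper below assumes yield is the fraction of fruits on the tree that the algorithm correctly counts and that "improvement" is the difference in yield in percentage points; these definitions are our reading of the tables, and the counts in the example are hypothetical.

```python
def detection_yield(correct, ground_truth):
    """Fraction of ground-truth fruits correctly detected, as a percentage."""
    return 100.0 * correct / ground_truth

def improvement(hdr_yield, ldr_yield):
    """Yield gain of HDR over LDR, in percentage points."""
    return hdr_yield - ldr_yield

# Hypothetical counts for one tree: 20 fruits on the tree,
# 12 found in the LDR image, 18 in the HDR image.
ldr = detection_yield(12, 20)   # 60.0
hdr = detection_yield(18, 20)   # 90.0
print(improvement(hdr, ldr))    # 30.0
```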

Besides the orange trees, additional studies were also performed on lemon and plum trees. For the lemon tree, the improvement of HDR was, on average, 2%, with maximum improvements of approximately 13% in some cases. In other cases, the same number of fruits was recognized and counted in both image types. As mentioned previously, this can be due to the illumination conditions and the low number of fruits to detect, which create similar conditions for both LDR and HDR images.

Regarding the plum tree, the improvements were around 21%, with HDR performing better in all situations. The plum tree images were acquired under low illumination; moreover, the plum fruit itself has a dark color. Under these conditions, the LDR image performs poorly, since most fruits remain hidden in the images. The HDR image takes advantage of its lighting and color enhancement and allows fruit detection even under the most unfavorable conditions.

6. Conclusions

The main objective of this work was to compare LDR and HDR images of fruit trees for fruit detection and counting, applying three tone mapping methods. The aim of this application is to predict crop productivity in order to facilitate land management and the application of precision farming techniques. Given the high dynamic ranges present in real scenes, especially in natural environments, an increasingly accurate representation of images is sought. Conventional devices, however, cannot represent this diversity, and it is in this sense that HDR techniques emerge. An HDR image stores luminance information from the real scene in a way similar to what the human visual system is able to observe. Hence, the application of HDR techniques in the context of precision agriculture can be an added value, since it captures the real characteristics of the scenes more reliably. To compare the performance of HDR against LDR images in the fruit detection and counting task, three case studies were analyzed in detail: one with an isolated orange tree, one with an orange tree surrounded by other trees, and a third with a lemon tree. For each case, an algorithm developed for fruit detection and counting was applied to four images: one LDR and three HDR with different tone mapping operators (from the camera itself, Reinhard, and Drago). It should be noted that the detection algorithm itself was not the focus of the current study, which serves only as a proof of concept comparing HDR and LDR fruit detection under the same analysis conditions. Once the results were processed and analyzed, it was concluded that the use of HDR images instead of LDR for this purpose is advantageous, with performance improvements sometimes exceeding 30%. However, there are also situations where, due to phenomena such as ghosting, the yield decreases and HDR is not as efficient in this task.
Another important factor is the tree illumination conditions. In fact, the overall performance difference between LDR and HDR images increases when shadowing effects are present. There were no significant differences between the tone mapping operators applied. Besides the detailed case studies, additional cases were run on orange, lemon, and plum trees, comparing the LDR image with the HDR image using the camera's tone mapping. These results also confirmed an improvement of HDR over LDR. Despite the reduced number of experimental tests used in this work, the results achieved demonstrate the potential benefits of the proposed methodology. As future work, it is proposed to apply this methodology to larger tree farms, with fruit at different stages of maturation, in order to predict productivity at an earlier stage of the process. To that end, images will be acquired on farms using cameras installed on terrestrial and aerial drones. In addition, the fruit detection and counting algorithm will be optimized, in order to eliminate the false positives sometimes observed and to automate the definition of values such as the threshold and the radius of the circles to look for. Machine learning classification techniques should also be tested in this context.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
