Abstract

Control over water usage for irrigation purposes is a key factor in achieving sustainability in agriculture. The irrigation of urban lawns represents a high percentage of urban water usage. The use of information and communication technology (ICT) offers the possibility of monitoring the state of the grass in order to adjust the irrigation regime. In this paper, we propose an Arduino-based system with a camera mounted on a drone. The drone flies over the garden taking pictures of the grass. Those pictures are processed with a rule-based algorithm that classifies them according to the grass quality. Pictures can be tagged in three categories: high coverage, low coverage, or very low coverage. After designing our algorithm, twelve pictures were used to verify its correct operation. The results show a 100% hit rate. To analyze the suitability of using drones to perform this task, we carried out a comparative study for gardens of different sizes, where the drone and a similar system mounted on a small autonomous vehicle have been used. The results show that, for gardens bigger than 1000 m², the use of a drone is needed due to the time the vehicle consumes to cover the entire surface. Finally, we show the results of sending the image information after processing it in different ways.

1. Introduction

Water is a scarce resource; less than 3% of the world's water is freshwater, and only 1% is available in rivers, lakes, and aquifers [1] and can be used for irrigation, industry, and human consumption. However, the increasing number of water consumers, the floods and droughts due to climate change, and water pollution endanger the continuity of current water use models. Efficiency in the use of water is nowadays crucial. The Food and Agriculture Organization of the United Nations (FAO) estimates that in 2050 there will be enough water to ensure food production for the worldwide population. However, poverty and food insecurity will remain in several regions and countries [2]. Moreover, water availability may diminish in some areas. For these reasons, it is necessary to promote new techniques that maximize water efficiency in as many areas as possible. The optimization of irrigation techniques is a vital process to improve sustainability and the rational use of water in agriculture. Many techniques have been developed for different crops [3]. Most of these techniques are applied in agricultural areas. Nevertheless, urban lawns demand a high amount of water, and no technique has been specifically designed for this case.

We can define urban lawns as the group of green areas in a city. These green areas include domestic or private gardens, public gardens, recreational green areas dedicated to sports, and roundabouts. Some urban lawns are formed only by grass, while others also have shrubs or trees. In this paper, we focus our efforts on grass classification for irrigation purposes, since shrubs and trees are irrigated by other methods. Thus, it is necessary to promote precision gardening in order to improve the sustainability of water usage. The use of precision gardening implies the use of information and communication technology (ICT) for monitoring the plots and achieving a more sustainable culture [4]. In smart cities, the monitoring of the water requirements of urban lawns can be used to define the irrigation process. Also, in these cities, many other processes are monitored, and it is possible to identify the best moment to irrigate depending on the water and energy use in the grid.

Different technologies can be used for monitoring the grass. The main ones are based on remote sensing systems, the use of drones, and wireless sensor networks (WSNs). The use of satellite images for remote sensing is useful for monitoring changes in land coverage [5]. Nevertheless, the spatial resolution of the currently available images is too low. Nowadays, the highest spatial resolution is offered by the WorldView-4 satellite sensor, which provides a multispectral resolution of 1.24 m. This satellite has a revisit period of 4.5 days [6]. These characteristics do not fulfil our needs: for continuous grass monitoring, we need to observe the state of the grass at least once per day. In addition, we should consider that, on some days, cloudy conditions may render the images useless. Mulla [7] indicates that remote sensing based on satellite images is not useful for precision gardening. The second option is the use of drones with a camera to take pictures of the whole garden. The spatial resolution will depend on the camera's characteristics and on the flight height. In this case, the main disadvantages are that the drone is not able to fly on windy days and that the legislation in some countries may limit flights in urban and inhabited areas. The use of drones for monitoring purposes is increasing, and they can be used even for emergency rescue systems [8]. The last option for grass monitoring is the use of WSNs with RGB sensors [9]. That system is based on a small autonomous vehicle (SAW) that moves along the garden gathering data about the grass coverage with the RGB sensors. However, it is necessary to evaluate the time consumed to cover gardens of different sizes and thus the viability of this approach. WSNs are widely used for environmental monitoring, and many examples can be found in [10]. For our objectives, satellite remote sensing cannot be used because we need daily control and more precision than current satellites can offer. The use of soil moisture sensors alone will not indicate when the grass needs to be replanted; to know when it is necessary to replant, we must use sensors that measure the electromagnetic radiation (cameras). The use of fixed cameras means that many of them must be placed, which leads to a higher cost. For this reason, we need to place the sensors on a vehicle. An airplane is discarded due to the high cost of daily flights. Another option is the use of a SAW, but this may damage the grass. Therefore, the only option left is the use of a drone.

This paper presents a smart system able to monitor the state of the grass and to decide the irrigation and planting needs. The system is capable of classifying the grass into different categories, that is, high coverage, low coverage, and very low coverage. The proposed system is composed of an Arduino node with a CMOS sensor. Our system is based on the idea developed in [9]. We will verify this system and compare it with the proposed solution based on a drone. This proposal is part of a bigger study where the images will be locally processed by the drones, which will only send the tag for a specific area. Thus, this paper presents the design, implementation, and verification of the drone operation and how it collects the pictures. After collecting the images, they are processed to analyze the color composition, and finally our designed algorithm classifies them. In the next step of our study, we plan to add soil moisture sensors to help decide the irrigation regime. Our proposal will include the deployment of two moisture sensors placed 5 meters to the east and west of each sprinkler. The number of moisture sensors used will depend on the size of the monitored area. Each pair of moisture sensors will be connected to a wireless node. The wireless node will be in charge of sending the data gathered by the moisture sensors to the base station via a WiFi connection. In order to ensure that all the nodes can reach the base station, or sink, a multihop protocol is proposed. With the soil moisture sensors, it is possible to monitor the remaining water in the soil, and with the CMOS sensor, it is possible to identify the grass coverage using the green histograms of the obtained pictures. Further studies will integrate these functions.

The main beneficiaries of the system proposed in this paper are cities, which can use it to plan the irrigation of their urban lawns.

The rest of the paper is structured as follows. Section 2 reviews related work similar to our system. Section 3 presents the scenario and the materials used. In Section 4, the entire proposal is detailed, including the process followed to verify the grass coverage classification based on camera pictures. Section 5 presents the results of our proposal applied to gardens of different sizes. Finally, the conclusion and future work are given in Section 6.

2. Related Work

This section presents some of the current systems focused on monitoring gardens and crops.

Many authors have proposed different solutions for monitoring the needs of gardens and crops. Firstly, we will discuss WSN-based approaches. Tripathy et al. [11] proposed a system with temperature, light, and water sensors for urban gardens. This kind of system requires the deployment of several sets of sensors to monitor big areas, for example, to detect a small area that needs replanting inside a bigger one. Because of this, a large number of sensors would be required.

There are other systems that include the use of cameras along with other sensors. Macedo-Cruz et al. [12] used pictures taken with a CCD camera. These authors used a combination of three thresholding strategies (the Otsu method, the isodata algorithm, and the fuzzy thresholding method) to determine frost damage. Lloret et al. [13] designed a WSN based on the use of cameras for detecting unusual status in the leaves of vineyards. The camera took images, and the sensor node processed them to detect anomalies and reported them to the farmer. These studies are based on the use of cameras at ground level. Therefore, these systems present the same problem as the WSNs explained above: for monitoring a big area, aerial pictures are required.

There are three alternatives in remote sensing: the use of unmanned aerial vehicles (UAVs), aircraft, and satellite images. Matese et al. [14] compared the use of UAVs, aircraft, and satellite images in vineyards. They concluded that the economic breakeven between UAVs and the other systems studied lies between 5 and 50 ha. Moreover, the different systems provided comparable results for coarse vegetation gradients and large vegetation clusters, but in the opposite situation the satellite images fail. Torres et al. [15] analyzed satellite pictures captured by QuickBird. Images at different wavelengths are used to obtain green vegetation indexes, near-infrared spectroscopy (NIR), the normalized difference vegetation index (NDVI), the panchromatic index, and the ratio vegetation index (RVI). These indexes are used to characterize the size and potential of each olive tree. Xu et al. [16] used the moderate resolution imaging spectroradiometer (MODIS) to determine the production of grasslands in China, measuring the NDVI of different areas of the country. Satellite imaging, however, has several drawbacks. Satellite images are expensive and, as mentioned earlier, only economically viable for large areas. Another problem is the periodicity of the images, which prevents daily control of the area to be monitored. In addition, satellite images have low spatial resolution.

To solve the problem of monitoring big areas with better resolution than satellite remote sensing, different authors have proposed the use of UAVs and aircraft. Yang [17] designed an airborne multispectral digital imaging system. This system is based on 4 cameras that capture images in the blue (430–470 nm), green (530–570 nm), red (630–670 nm), and near-infrared (NIR, 810–850 nm) bands. The results confirmed that this system is suitable for monitoring crop pests and growing conditions, mapping invasive weeds, and assessing wetland ecosystems. Mutanga and Skidmore [18] studied the variation of nitrogen (N) in the grass of the Kruger National Park, in South Africa. They used the images obtained from a HYMAP MKII scanner (a type of spectrophotometer carried on an aircraft) and a neural network to classify the images. They concluded that 60% of the variation could be explained by the images from their system.

The use of drones is currently a very popular method to obtain aerial images since it is an economical option (in areas smaller than 5 ha [14]) and easier to manage than an aircraft. Candiago et al. [19] used a drone equipped with a Tetracam ADC Micro camera to acquire images in the red (R), green (G), and near-infrared (NIR) bands, allowing the calculation of the NDVI, the green normalized difference vegetation index (GNDVI), and the soil adjusted vegetation index (SAVI). Cambra et al. [20] proposed another system based on drones. This system consists of a network made up of a drone and a pressure sprayer. The videos captured by the drone are transferred to a PC, which analyzes them using the OpenCV library. The system activates a set of sprayers in areas where weeds are detected. We can observe that, in these cases, UAVs can be used for monitoring areas smaller than 5 ha.

Finally, Kumar et al. [21] presented a smart autonomous gardening rover that is able to identify and classify different species of plants using extraction algorithms and a neural network. Once the plant is identified, the rover introduces its arm containing the sensors and, according to the measurements, sprays water and fertilizers from this arm. In this case, the authors do not use aerial vehicles. For our proposal, we cannot use terrestrial vehicles because they could damage the grass and the garden flowers, which are even more sensitive than grass.

In summary, the use of a WSN is not the best option for monitoring big areas, since many sets of sensors are required to identify problems in smaller areas within them. The use of cameras at ground level presents the same problem, because it is not possible to take pictures of big areas with the necessary resolution. To solve this problem, we can use remote sensing. Satellite imaging has important drawbacks, and the use of this technology is not possible in our case (low precision and a long revisit time to the same point [7]). Aircraft offer the best resolution, and the periodicity of taking pictures is better than with satellites; however, the cost of this alternative is very high for monitoring small areas. Finally, in this paper, we present a drone that carries a camera to measure the reflectance of grass, together with an algorithm that identifies those areas that present low grass coverage or require water. Additionally, our system stores the information in a database for statistical analysis and further uses.

3. Scenario and System Description

This section details the employed materials, including the vegetal species and the electronic elements that compose the sensor we designed and developed. The methodology followed to process the data is also presented.

3.1. Vegetal Material to Verify the Grass Coverage Classification

In this subsection, we describe the vegetal material used to verify the classification system proposed in previous work [9].

The vegetal material has been obtained from a country estate called El Encín. This estate belongs to the IMIDRA research center, where the agrifood and agroenvironmental research projects of the Community of Madrid (Spain) are carried out. It is located in Alcalá de Henares, Madrid (Spain) (see Figure 1). Currently, IMIDRA is conducting a study of the water demand of different grass species. The plots of these experiments are employed to find a relation between the coverage and the response of our developed device. Different combinations of grass species are used in the plots. Each plot has a surface of 1.5 m².

3.2. Scenarios Used to Test the Developed System

This subsection describes the gardens used to test the system. The aim of using gardens of different sizes is to evaluate the feasibility of using one type of system or another to monitor each garden, as well as the required energy consumption and bandwidth for each scenario.

In order to test our system, five different gardens have been used. The selected gardens do not have any inclination or irregularities in the terrain. The smallest garden has a surface of 180 m², and the biggest one has a surface of 160,000 m². The rest of the gardens have surfaces of 900, 4600, and 7000 m², respectively. All of them are covered only with grass, that is, there are no trees or shrubs. The gardens of 900, 4600, and 7000 m² have a rectangular shape, and the other two gardens have an "L" shape. The selected gardens have good grass coverage over the entire area.

3.3. System for Image Capturing

In order to gather the different images of grass, we have developed a camera-based system that will be installed on the drone. The system for image capturing is composed of an Arduino module and an OV7670 camera able to take pictures with a VGA resolution of 640 × 480 pixels. It presents high sensitivity for low-light operation and requires a low operating voltage, which makes the OV7670 camera module suitable for embedded portable applications.

Figure 2 shows a basic schematic diagram of the camera connection. The camera module works with a single +3.3 V power supply. The camera needs an external oscillator to generate its clock signal (XCLK pin). Different communication protocols can be selected, although the use of the I2C protocol is recommended. Through the I2C bus, we can configure the camera, which delivers the image through the pixel clock signal (PCLK) and the data bus (data[9:0]). If an MCU with an integrated camera interface is selected, such as the STM32F2 or STM32F4 series, no additional module is required. For hosts that do not have a camera interface, additional hardware is needed to store a complete frame before it is read by low-speed MCUs.

The system for capturing the images of grass must be installed on a drone, so we should choose modules of small size but capable of performing the image capturing and processing tasks. The final goal of our system is to perform the image processing on the drone while it covers the route. There are different devices specially designed for the development of embedded systems and Internet of Things (IoT) deployments. In our case, we are going to use an Arduino model. Arduino is an open-source platform that provides both hardware solutions and its own integrated development environment (IDE). Arduino modules are characterized by their simplicity in programming and system management. Table 1 shows a comparison of the characteristics of some of the simplest and most widely used modules that would suit our needs. In our case, we selected an Arduino UNO Rev. 3 module. The Arduino Uno is an electronic platform based on the ATmega328 processor. It has 14 digital input/output pins, 6 analog inputs, and a 16 MHz crystal oscillator. It can be programmed through its USB connection and can be powered through that USB connection, from a PC, or using a Li-ion battery. We selected this module because of its price, the cheapest available, and its weight: at 25 g, it is the second lightest. This is important because, when working with drones, the total weight of the system impacts the flight autonomy.

Additionally, we provide our system with an ESP-01 wireless module, which can be deactivated when not needed, and a microSD memory module, which allows us to save data and even images if needed. Figure 3(a) shows the complete system and the main connections among its components, while Figure 3(b) shows the 3D design of the support that fixes the camera to the bottom part of the drone. The camera points towards the ground.

4. Proposal

This section presents the proposed system to gather information about the described grass. First, the general architecture is described; then, the selected drone and the flight planning are shown. Finally, the operation process of our system and the picture analysis are detailed.

4.1. General Description of the Architecture

The proposed architecture is based on a programmed drone that crosses the field to be analyzed (see Figure 4). The path must be designed beforehand using software compatible with the chosen drone model.

While the drone moves, it periodically takes pictures of the lawn. For each image taken, the capture system processes the picture and decomposes it into its 3 RGB components. As a result of this process, 3 data matrices are obtained, one per component, containing the red, green, and blue values of each pixel of the picture. From each matrix, we can extract the histogram, from which we can determine the status of that parcel. Finally, after applying our classification algorithm, each picture will be labelled as a parcel of high coverage, low coverage, or very low coverage. We could have a single base from which the drone takes off and lands. However, to optimize the battery lifetime, we opted for a 2-base system: the first one is the base from which the drone takes off, and the second one is the drone's landing point.

On the other hand, to reduce the battery consumption caused by data transmission and the possible packet losses due to the drone movement, the data related to parcel information will be transmitted upon arrival at the landing base. That is, the system will take the images and process them locally, and after arriving at the landing base, the data will be wirelessly transmitted through a WiFi connection.

The information collected at each base will be sent to a central server located in the cloud. Finally, the owners will be able to see the status of their fields in real time.

4.2. Drone and Flight Planning

To implement our system, we have selected a commercial drone with the capacity to carry our small electronic device that collects the images. Table 2 summarizes the main features of some commercial models that could be used to implement our proposal. We selected the DJI Phantom 4 Pro, which is considered one of the most widely used devices for taking aerial images for semiprofessional purposes. This model incorporates an advanced vision positioning system (VPS) that allows the drone to perform a precise stationary flight, even without satellite positioning, making flights easier and safer.

Although the drone can be manually controlled, to monitor the surfaces and collect the pictures we have used flight planning software. Several applications with support for different operating systems exist for planning drone flights. In our case, we have selected free software specially designed for Android devices: DroneDeploy [22], a software platform designed for drone flight planning. The DroneDeploy application provides a simple interface for data capture and automated flights that allows exploring and sharing high-quality interactive maps directly from a mobile device. DroneDeploy also allows generating high-resolution maps and 3D models.

DroneDeploy is compatible with several commercial drone models, such as the following:
(i) Mavic Pro
(ii) Phantom 4 Pro
(iii) Phantom 4
(iv) Phantom 3 Pro
(v) Phantom 3 Advanced
(vi) Inspire 1 and Inspire 1 Pro
(vii) Inspire
(viii) Matrice 100
(ix) Matrice 200
(x) Matrice 600

For drones equipped with cameras, the application allows exploring interactive maps; measuring distances, areas, and volumes; analyzing elevation and NDVI images; and sharing maps and annotations through instant messaging applications. Figure 5 shows an example of a planned flight in a real scenario, and Figure 6 shows the drone during a flight.

4.3. Control Algorithm

Before the drone starts taking measurements, we must consider that the device is going to move from coordinator node 1 to coordinator node 2, which is the node able to transmit the data to the cloud or to a server. It is also important to consider that the drone must have enough autonomy to cover the entire route. Therefore, these checks must be autonomously carried out before starting the flight.

As shown in Figure 7, before starting the flight, the drone receives the data related to the field under study and checks whether its battery allows full field coverage. If its energy autonomy allows it, the drone takes off and starts taking pictures. For each image taken, the drone analyzes the image and decomposes it into its RGB components. After that, the drone keeps the green component and saves the results together with the relative position extracted from the flight plan. After taking the image, the drone checks whether it has reached the end of the route and, if not, keeps moving forward to the next measurement point. When the drone completes its flight, it lands on the base of coordinator node 2. Once at the base, the drone wirelessly connects to coordinator node 2 and transmits all the data obtained from the field. After finishing its function, the drone switches to standby mode.

On the other hand, after receiving the data of the flight plan and the size of the field to be analyzed, the drone determines whether it has enough energy to complete the route. If the battery level is not high enough, the drone sends a message to the user asking for flight acceptance. If the user does not accept the flight, the drone remains in standby mode at the base of coordinator node 1. However, if the user accepts the flight, the drone starts flying and capturing images. After each measurement, the drone checks whether its autonomy is sufficient to take one more measurement and still reach coordinator node 2. As long as this condition holds, the path is followed. When the condition is no longer met, the drone leaves the flight plan and goes directly to the base of coordinator node 2. After that, the drone wirelessly connects to coordinator node 2 and transmits all the data obtained from the field as well as the position where it stopped taking measurements. After finishing its operation, the drone switches to standby mode.
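As an illustration, this decision logic can be summarized in a toy MATLAB simulation. The battery units, the per-step costs, and the placeholder label below are assumptions of ours, not parameters of the real drone:

% Toy simulation of the control algorithm of Figure 7 (assumed values).
battery   = 100;                 % remaining battery (arbitrary units)
n_points  = 40;                  % measurement points in the flight plan
cost_move = 2;  cost_shot = 0.5; % assumed cost per displacement / per picture
cost_home = 10;                  % assumed cost to reach coordinator node 2
labels = strings(1, n_points);   % one label per gathered picture
for p = 1:n_points
  battery = battery - cost_shot;              % take and process a picture
  labels(p) = "HC";                           % placeholder classification
  % Check that one more measurement plus the trip to node 2 is feasible.
  if battery - (cost_move + cost_shot) < cost_home
    fprintf('Leaving the plan at point %d, heading to node 2\n', p);
    break                                     % abandon plan, go to node 2
  end
  battery = battery - cost_move;              % move to the next point
end
fprintf('Transmitting %d labels at coordinator node 2\n', p);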

4.4. Process to Analyze the Pictures

In this section, we present the verification process applied to the system developed in [9]. In order to verify it, we used new grass plots and, from the pictures obtained with the Arduino camera, we extracted the values used to perform the comparison.

Different pictures of the grass plots were taken (see Figure 8(a)). After obtaining each picture, it is cropped in order to extract the part related to the grass and to ensure that the size of the picture is 1500 × 1000 pixels (see Figure 8(b)). Then, the resolution of the picture is reduced to 10%, so that the picture has 150 × 100 pixels (see Figure 8(c)).
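As a minimal sketch of this preprocessing stage, the cropping and resizing can be done in MATLAB with the Image Processing Toolbox; the file name and the crop rectangle below are illustrative assumptions:

% Illustrative preprocessing sketch (file name and crop rectangle assumed).
img = imread('plot01.jpg');
% Crop the region containing only grass; [xmin ymin width height] is chosen
% so that the cropped picture has 1500 x 1000 pixels.
grass = imcrop(img, [250 400 1499 999]);
% Reduce the resolution to 10%, obtaining a 100 x 150 pixel picture.
small = imresize(grass, 0.1);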

Once we have the picture with a size of 100 × 150 pixels, we can obtain the brightness value of each pixel. To do so, we use MATLAB (see Algorithm 1).

% Read picture.
x = imread(picture);
% Extract the green component (second channel of the RGB matrix).
GREEN = x(:,:,2);
[Rows, Columns] = size(GREEN);
% Calculate the green histogram.
h_G = zeros(1, 256);
for g = 1:Rows
  for h = 1:Columns
    V_Green = double(GREEN(g,h));        % cast to double to avoid uint8 saturation at 255
    h_G(V_Green+1) = h_G(V_Green+1) + 1; % brightness 0-255 maps to index 1-256
  end
end
% Vector with the histogram of the green component.
His_G = h_G;

An image can be understood as a matrix of row × column pixels. In order to analyze each pixel, we go through each row, accessing each cell that represents a column. There are several ways to do this task, but the simplest one is to use 2 nested "FOR" loops, so that the outer "FOR" loop places the cursor at the beginning of a row and the inner "FOR" loop makes the cursor go through all the cells of that row until reaching the last column. A vector of 256 positions, corresponding to the brightness levels of the color, is created, and for each level of brightness, we count how many pixels have that brightness. Finally, we save the result in the variable His_G, which stores the histogram.
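Note that, for readers with the Image Processing Toolbox, the same 256-bin histogram can be obtained with a single built-in call; this is an equivalent shortcut to the loops above, not a different method:

% Equivalent one-line alternative using the Image Processing Toolbox.
His_G = imhist(GREEN, 256)'; % transposed to keep a 1 x 256 row vector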

Once we have the matrix of the green component with the brightness values, it is possible to apply the methodology described in [9]. So, firstly, we obtain the green histograms shown in Figure 9. As we can see, all the new histograms follow the trend of the mean histograms of the different grass coverages. After that, we can obtain the number of pixels with brightness values between 40 and 60. We selected this range based on the results shown in [9].
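In MATLAB, this count reduces to a single line; note the offset of one position, since position f of His_G stores the number of pixels with brightness f − 1:

% Number of pixels whose green brightness lies between 40 and 60 (inclusive).
S = sum(His_G(41:61));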

Finally, since the flight height of the drone is fixed with respect to the ground, the focus of the camera is manually set before the flight.

5. Results and Discussion

This section shows the results and the discussion of the extracted values. First, the grassland classification method based on analyzing pictures instead of RGB sensor readings is presented. Then, the results of the simulations of applying the proposed system (with a drone) or our previous system (with the SAW) in five gardens of different sizes are evaluated. Finally, a comparison between our system and the current proposals is discussed.

5.1. Grassland Classification

In order to carry out our classification, we only need to sum the number of pixels with brightness values between 40 and 60 in the green component of the picture. Then, we analyze the classification assigned to each picture to check whether the tags were correctly assigned.

After processing the pictures, the matrix with the green brightness data is used. The pictures were not previously tagged according to their type of coverage; they are just named as new samples (NS) 1 to 12. They are numbered according to the summation of their pixels with brightness values between 40 and 60.

In the previous work, the plots were assigned to three categories: high coverage (HC), low coverage (LC), and very low coverage (VLC). Figure 9 represents the obtained histograms of NS 1 to NS 12 and the average values of the HC, LC, and VLC histograms obtained in [9]. In solid colors, we can see the average value of the tagged histograms: HC in green, LC in orange, and VLC in red. The data from NS 1 to NS 12 is shown with black dashes. It is possible to see that each histogram follows the behaviour of one of the average histograms from the previous work. The summation of pixels with brightness values between 40 and 60 is compared with the results obtained in the previous work [9], and ranges of values were set to tag the different pictures. The results can be seen in Figure 10. The HC plots, with good grass coverage, have a summation lower than 500. Thus, the plots named NS 1 to NS 4 are classified as HC plots. NS 5 to NS 9 have a summation lower than 1500 but higher than 500; they are classified as LC. Finally, NS 10 to NS 12, which have a summation higher than 1500, are classified as VLC. Taking into account the 12 pictures under study, 4 of them were tagged as HC, 5 as LC, and 3 as VLC.

The next step is to verify that the classification was correctly done. Figure 11 shows the pictures and their classification according to our proposed algorithm. The results show that the classifications have been correctly done. The plots tagged as HC present a grass coverage of 100% (see Figures 11(a)–11(d)). On the other hand, the plots classified as LC present a lower grass coverage, and most of the grass shows a yellowish color, which indicates a poor grass state. Those plots (see Figures 11(e)–11(i)) present an irrigation deficit. Finally, the plots tagged as VLC (see Figures 11(j)–11(l)) present a very low coverage; most of each plot has no grass, and only the brown soil is observed. In those plots, irrigation is not immediately required; however, a seeding process will be necessary to restore the grass coverage. Thus, we can state that the methodology presented in [9] with RGB sensors can be used to evaluate the grass state in the pictures. This is due to the fact that the operation of the sensors inside the cameras and the image postprocessing are similar to the operation of the RGB sensors.

The only limitation is that the system described so far operates with matrices of 100 × 150 brightness values. However, we can divide the summation of pixels by the total number of pixels. If the result is lower than 0.03, the assigned category will be HC. The plots with values between 0.03 and 0.1 will be tagged as LC. Finally, the plots with values higher than 0.1 will be classified as VLC. By following this process, it is possible to apply this method to pictures of different sizes.
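The complete classification rule can be summarized in the following MATLAB function; the thresholds are those derived above, while the function name is ours:

function label = classify_coverage(His_G)
% CLASSIFY_COVERAGE tags a picture from its green-component histogram.
% The ratio of pixels with brightness between 40 and 60 over the total
% number of pixels makes the rule valid for pictures of any size.
  S = sum(His_G(41:61));   % pixels with brightness values between 40 and 60
  ratio = S / sum(His_G);  % normalization by the total number of pixels
  if ratio < 0.03
    label = 'HC';          % high coverage
  elseif ratio <= 0.1
    label = 'LC';          % low coverage
  else
    label = 'VLC';         % very low coverage
  end
end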

5.2. Study of Feasibility of Using This Method in Different Garden Sizes

In this subsection, we detail the simulations of using our proposal (with a drone) in the gardens of different sizes that were presented in Section 3.2. The results are compared with the simulation results of using a SAW. The parameters evaluated are the time required to gather the data from the entire garden and the volume of information generated. The amount of gathered data, the number of turns, and the total distance travelled are also considered in these simulations.

To calculate the number of turns (NT), the shorter side (SS) of the field is divided by the width of each turn (WP), that is, NT = SS/WP (1). On the one hand, the width of each turn with the SAW (WPSAW) is the width of the SAW itself (WISAW), since the sensors are located covering the width of the vehicle, so WPSAW = WISAW (2). On the other hand, the width of each turn in the case of the drone (WPdrone) depends on the flight height (FH) and on the focal aperture of the camera (FA), WPdrone = 2 × FH × tan(FA/2) (3). In our examples, WPSAW is 0.5 m and WPdrone is 6.6 m. The area contained in each picture gathered with the drone is 4.95 × 6.6 m. The FH must be set according to the required resolution of the pictures; in our case, it was 15 m. The values of NT for each garden are shown in Table 3. The NT values for the drone are much lower than those for the SAW because they have different WP.

Once the number of turns is calculated, the next indicator is the total distance travelled to cover the field. In order to simplify the simulation, the travelled distance (TD) is calculated as the distance travelled along the turns (the number of turns multiplied by the longer side (LS)) plus the distance travelled to change from one turn to the next (the number of turns minus one, multiplied by the width of each turn), that is, TD = NT × LS + (NT − 1) × WP (4). The TD for each garden can be seen in Table 3. The TD is lower when using the drone than when using the SAW; in fact, the TD with the drone is less than one tenth of the TD with the SAW.

To complete the comparison, we need to calculate the time consumed (TC) to collect the data from each garden. The time consumed is calculated as the travelled distance divided by the mean velocity (MV), plus the lost time (LT) due to deceleration and acceleration at the end and beginning of each turn multiplied by the number of turns, that is, TC = TD/MV + NT × LT (5).

There are some considerations that must be taken into account to select the mean velocity. The time the SAW takes to gather and process each record (TGD) and the area covered in each record (CA) must be considered to calculate the mean velocity of the SAW (MVSAW); since the SAW advances CA/WPSAW meters per record, MVSAW = CA/(WPSAW × TGD) (6). To calculate the mean velocity of the drone (MVdrone), we should consider the pictures per second (PPS) that the camera should take and the length of the shortest side of each picture (SSP), so MVdrone = SSP × PPS (7). The shortest side of the picture is defined as the number of pixels of the shortest side of the picture (NPSSP) multiplied by the width of each turn and divided by the number of pixels of the longest side of the picture (NPLSP), that is, SSP = NPSSP × WPdrone/NPLSP (8). The PPS must be set by the user according to the camera features. The consumed time for each garden is shown in Figure 12. It is possible to see that the TCs with the SAW are much higher than the TCs with the drone. In the biggest garden, the TC for the SAW is up to 180 h, while for the drone it is 25 min and 30 s. The SAW is only useful for small gardens like gardens 1 and 2, with TCs of 0.22 h and 1.08 h, respectively. For gardens with more than 1000 m², the SAW is not advisable due to the TC. The mean flying time of the employed drone is 30 minutes; the largest space that can be monitored by a single drone depends on the shape of the area and the number of turns needed. As an example, a fully charged drone can cover an area of 200,000 m² with one side of 400 m and the other of 500 m.

From this point on, we only continue with the simulation for the case of using a drone. The total number of pictures (TP) can be calculated as the number of pictures per second multiplied by the total distance and divided by the mean velocity, TP = PPS × TD/MVdrone (9). The TP values in the selected gardens are 5, 27, 139, 212, and 4909 for gardens 1 to 5, respectively. To calculate the volume of information generated if we send all the pictures (VIPIC), we take into account the number of pictures and the weight (in bytes) of each picture (WPi), VIPIC = TP × WPi (10). However, if we send only the green band of each picture (VIGPIC), we transmit the matrix with the values of the green band, that is, one third of the volume, VIGPIC = TP × WPi/3 (11). Moreover, it is possible to send only the classification label of each picture (VICPIC); in this case, we only consider the number of pictures and the weight of each category label (WC), VICPIC = TP × WC (12). Figure 13 shows the comparison between the VIPIC, VIGPIC, and VICPIC in each garden. As expected, the transmission of the VICPIC is the lightest. Sending the VIGPIC leads to a reduction of two-thirds (66.7%) of the total volume of data compared to sending the VIPIC, while sending the VICPIC represents a reduction of 99.8% compared to sending the VIPIC.

Taking these results into account, it is demonstrated that the best option for data transmission is to send only the label of the plot characteristics together with its plot identification or position. This label is locally calculated by our system and stored on the SD card in order to be wirelessly transmitted at the landing base. Thus, the only information transmitted from the drone to the base station is one label per gathered picture. By doing this, we reduce the energy consumption, as we do not keep the wireless connection continuously enabled. In order to know the position of each picture, we have included the route of each drone in the database. Then, it is possible to relate the label of each picture with the position of the drone according to the number of the picture. In this case, GPS is not useful to identify the pictures due to the small distance between consecutive drone positions.
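As a sketch, the feasibility computation of this subsection, (1) and (4)–(12), can be reproduced in a few lines of MATLAB. The garden dimensions and the values of PPS, LT, and WC below are illustrative assumptions, not the exact values used in our simulations:

% Illustrative parameters (assumed values, not those of our simulations).
SS = 400; LS = 500;            % garden sides (m), the 200,000 m2 example
WPdrone = 6.6;                 % width of each turn for the drone (m)
PPS = 4;                       % pictures per second (assumed)
NPSSP = 480; NPLSP = 640;      % pixels of the shortest/longest picture sides
LT = 2;                        % lost time per turn (s), assumed
WPi = 640*480*3;               % bytes of one uncompressed 640 x 480 RGB picture
WC = 2;                        % bytes of one category label, assumed
NT = ceil(SS / WPdrone);               % (1) number of turns, rounded up
SSP = NPSSP * WPdrone / NPLSP;         % (8) shortest side of a picture (m)
MV = SSP * PPS;                        % (7) mean velocity of the drone (m/s)
TD = NT * LS + (NT - 1) * WPdrone;     % (4) travelled distance (m)
TC = TD / MV + NT * LT;                % (5) time consumed (s)
TP = ceil(PPS * TD / MV);              % (9) total number of pictures
VIPIC  = TP * WPi;                     % (10) volume if whole pictures are sent
VIGPIC = TP * WPi / 3;                 % (11) volume if only the green band is sent
VICPIC = TP * WC;                      % (12) volume if only the labels are sent
fprintf('TC = %.1f min, TP = %d pictures\n', TC / 60, TP);

With these assumed values, the sketch yields a TC of about 28 minutes, consistent with the 30-minute flight autonomy mentioned above for a 200,000 m² garden.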

5.3. Discussion and Comparison with Existing Systems

In this section, we analyze the limitations of our system and explain why our alternative is better than the existing ones.

Drone-based systems have three important issues: (I) drones cannot fly in windy conditions; (II) some countries have restrictive legislation on the use of drones; and (III) the environmental illumination changes.

Regarding the first issue, the number of windy days is usually small compared to the number of days suitable for flying, although this depends on the geographical region. In addition, changes in the grass are not usually abrupt, so occasionally missing the daily monitoring is not significant. In relation to the second problem, legislation regarding the use of drones has been very restrictive because most countries did not have previous legislation and wanted to avoid problems by limiting the use of drones; however, laws are currently being adapted to the evolution of drones. Finally, the illumination can have negative effects on the classification of grass. The illumination can change because of (I) the sky being covered with clouds; (II) the shadows of buildings, trees, and so on; (III) the time of day when the monitoring tasks are performed; and (IV) the season of the year when the monitoring tasks are performed [23]. To reduce the problem with shadows, we fly the drone at noon on sunny days to reduce the size of the shadows, and in future works, we would like to include a lux meter in the drone so that this parameter can be used in the classification algorithm as a correction factor.

Finally, we compared our system with other systems (see Table 4). The irrigation needs can be monitored with remote sensing (satellite or airplane [24]), SAWs, smart sprinklers (WSNs with weather information for calculating the evapotranspiration), and our system. Some existing solutions include sensors that detect electromagnetic radiation to determine the coverage of the vegetation. As we saw in Section 2, the NDVI, NIR, and other infrared-related indicators can be used for monitoring vegetation and are very common in remote sensing. In this paper, we demonstrated that visible light can be used without the need for an infrared camera.

All systems that use electromagnetic sensors are affected by shadows and changes of environmental light. In the case of satellite sensing, clouds can cover the image, so it cannot be used to monitor the urban lawns. This does not happen with airplanes and drones because they fly below the clouds. Moreover, remote sensing cannot be used for daily monitoring due to its low temporal resolution, and we cannot schedule daily pictures of an urban garden. Due to these facts, we only have the option of SAWs or drones for monitoring the grass. As we have previously seen, the SAW requires a lot of time to cover a large surface, and it is not recommended for urban lawns greater than 1000 m².

To monitor the irrigation needs, we can use smart sprinklers (the use of remote sensing for managing irrigation is not very common). Smart sprinklers are programmed according to the moisture of the soil and the calculation of the evapotranspiration of the plants by means of meteorological data. We decided to use moisture sensors because they are cheaper than smart sprinklers. Table 4 shows a summary of this discussion.

6. Conclusions

In this paper, the use of a drone equipped with an Arduino module and a camera for urban lawn monitoring has been evaluated. Prior to evaluating our proposal, we applied the methodology for classifying the grass quality based on RGB sensors explained in our previous work. The algorithm proposed in the previous paper [9] obtained a 100% hit rate. Besides, we have evaluated the performance of employing a drone or a SAW to cover gardens of different sizes. The results show that, for gardens bigger than 1000 m², the use of the SAW is not recommended. Finally, we compared the possibilities of sending the entire picture to be processed in a remote server, sending only the green band of the picture, or sending just the category of each picture. By sending only the category of each picture instead of the entire picture, we obtain a reduction in the volume of information of 99.8%. The total cost of our system is €30 (not including the price of the drone). The same system could be installed in cheaper drones with lower flight autonomy but with similar results.

This proposal is part of a bigger study where the images will be locally processed by drones, which will only send the tag for a specific area. Thus, this paper has presented the design, implementation, and verification of the drone operation and how it collects pictures. After collecting the images, they are processed to analyze the color composition, and finally our designed algorithm classifies them. As future work, further studies will integrate this function in the drone in order to process the images locally. It is also planned to add soil moisture sensors to control the irrigation regime. The moisture sensors will be connected to a wireless node, which will be in charge of sending the gathered data to the base station. With the soil moisture sensors, it is possible to monitor the remaining water in the soil, and with the CMOS sensor, it is possible to identify the grass coverage using the green histograms of the obtained pictures. Moreover, it will be interesting to test the possibilities of detecting and classifying different plant diseases. In addition, we intend to extend this work by including the analysis of pictures of other plant species. Finally, to solve the problems related to different light conditions, we will include a light sensor in the drone and perform several tests under different conditions in order to define different classification ranges for different light conditions.

Data Availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been partially supported by the “Conselleria de Educación, Investigación, Cultura y Deporte,” through the “Subvenciones para la contratación de personal investigador de carácter (Convocatoria 2017)” Grant no. ACIF/2017/069. Finally, the research leading to these results has received funding from “la Caixa” Foundation and Triptolemos Foundation.