A Quadratic Traversal Algorithm of Shortest Weeding Path Planning for Agricultural Mobile Robots in Cornfield

Le Zhang, Rui Li, Zhiqiang Li, Yuyao Meng, Jinxin Liang, Leiyang Fu, Xiu Jin, and Shaowen Li

Journal of Robotics, vol. 2021, Article ID 6633139, 19 pages, 2021. https://doi.org/10.1155/2021/6633139

Research Article | Open Access
Academic Editor: Changsheng Li
Received: 18 Nov 2020; Revised: 15 Jan 2021; Accepted: 04 Feb 2021; Published: 20 Feb 2021

Abstract

To improve weeding efficiency and protect farm crops, accurate and fast guidance for removing the weeds around crops is a topic of utmost importance for agricultural mobile robots. Motivated by this, we propose a time-efficient quadratic traversal algorithm that guides the removal of weeds around recognized corn plants in the field. To recognize the weeds and corn, a Faster R-CNN neural network is implemented for real-time recognition. An ultra-green (excess green, EXG) index is then used to convert the RGB images to grayscale. An improved OTSU (IOTSU) algorithm is proposed to accurately generate and optimize the binary image. Compared with the traditional OTSU algorithm, the improved algorithm effectively speeds up the threshold search and reduces processing time by compressing the searched grayscale range. Finally, based on the contours of the target plants extracted with the Canny edge detection operator, the shortest weeding path is computed by the proposed quadratic traversal algorithm. The experimental results show that the search success rate reaches 90.0% on the testing data, which ensures accurate selection of the target 2D coordinates in the pixel coordinate system. By transforming the target 2D coordinate points in the pixel coordinate system into 3D coordinate points in the camera coordinate system, the depth camera achieves multitarget depth ranging and planning of an optimized weeding path.

1. Introduction

In order to achieve green and pollution-free growth over the whole life cycle of field crops and the sustainable development of agriculture, many researchers focus on fully automatic weeding by weeding robots [1–4]. The emergence and use of agricultural mobile robots [5–9] not only can replace humans in dull and repetitive agricultural work but also can work efficiently and continuously in different outdoor environments. Robotic weeding techniques can also improve production efficiency and effectively free up human labor. Therefore, under natural growth conditions, accurate and rapid identification and removal of weeds among field crops plays an important role in achieving intelligent field management [10–14].

Until now, many researchers have carried out specific studies on the removal of weeds among field crops. Maruyama and Naruse developed a small weeding robot for rice fields [15]. They proposed an approach of moving multiple robots around a field to prevent weed seeds from sprouting, but the small weeding robot can only be used to prevent the germination of weed seeds in rice fields. Nan et al. proposed a machine-vision-based method that locates crops by providing real-time positional information of crop plants for a mechanical intrarow weeding robot [16]; this method is only used to remove weeds in a certain area. Zhang et al. proposed a navigation method for weeding robots based on the smallest univalue segment assimilating nucleus (SUSAN) corner and an improved sequential clustering algorithm [17], but it does not involve the removal of weeds in paddy fields. Malavazi et al. developed a general and robust approach for autonomous robot navigation inside a crop using light detection and ranging (LiDAR) data [18]; the approach can only detect the distribution of weeds by LiDAR and cannot perform ranging. Gokul et al. developed a trainable, automatic robot that helps remove unwanted weeds on agricultural fields by using gesture control of a three-axis robotic arm [19]. That study only introduces the design of the robot and does not involve the specific identification and location of weeds. Chechliński et al. addressed the bottleneck of deploying deep network models on low-cost computers, which lays a research foundation for further transplantation into agricultural robot tasks [20]. The core of that study is the deep network model itself rather than how to remove weeds in the field. Kanagasingham et al. integrated GNSS, compass, and machine vision into a rice field weeding robot to achieve fully autonomous navigation for the weeding operation [21]. A novel crop row detection algorithm was developed to extract the four immediate rows spanned by a camera installed at the front of the robot, but the algorithm cannot accurately locate weeds within rice rows.

For robotic path planning, visual detection methods and other sensor-based detection methods are widely implemented. Choi et al. found that the guidance line extracted from an image of a rice row precisely guided a robot for weed control in paddy fields and proposed a new guidance line extraction algorithm to improve the navigation accuracy of weeding robots in paddy fields [22]. Hossain and Ferdous developed a new algorithm based on the bacterial foraging optimization (BFO) technique [23]. They also explored the application of BFO to mobile robot navigation for determining the shortest feasible path from the current position to the target position in an unknown environment with moving obstacles. Contreras-Cruz et al. proposed an evolutionary approach to mobile robot path planning [24]; it combines the artificial bee colony algorithm as a local search procedure with an evolutionary programming algorithm to refine the feasible paths found by the set of local procedures. The methods above are mainly used for general robot path planning, and most of them do not consider that the removal of weeds also requires path planning.

Andújar et al. proposed a method for estimating the volume of weeds by using a depth camera [25]. By reconstructing a 3D point cloud of weed-infested corn crops in the field and using a Kinect device to estimate volume, they could determine crop status and accurately estimate weed height. Bakhshipour and Jafari used the morphological characteristics of crops to evaluate the application of the support vector machine (SVM) and artificial neural networks to weed detection [26]. Experimental comparison showed that the SVM-based method performs better for weed detection. Xu et al. developed a real-time weed positioning and variable speed herbicide spraying (VRHS) system for row crops [27]. They proposed an improved particle swarm optimization (IPSO) algorithm for the segmentation of wild cornfield weed images, which optimizes the traditional particle swarm optimization algorithm to meet the real-time data processing needs of field management. These researchers realized the detection and segmentation of weeds through traditional machine learning methods. However, they failed to provide a way to accurately locate weeds.

In order to reduce route length and operation time, Ya et al. proposed a new path-planning algorithm for static weeding [28]. To demonstrate the feasibility of laser weeding and improve its implementation, a prototype robot was built and equipped with machine vision and gimbal-mounted laser pointers. Liu et al. designed an on-site imaging spectrometer system to distinguish crop and weed targets [29]; a limited number of spectral bands can achieve multicategory discrimination between weeds or between crops and weeds. In general, some of the algorithms and systems above have achieved effective experimental results, but none of them quantitatively analyse and measure the distances between crops and weeds or between weeds, and path planning guidance for removing weeds while protecting the target crops is missing from the above research.

To solve the above problems and provide efficient and accurate weed removal guidance, this study proposes an efficient quadratic traversal algorithm for the field weeding robot. Through the combination of deep learning and traditional algorithms, a depth camera is used to accurately measure the distances between corn and weeds. The shortest weeding path around the crops is then planned for an efficient weeding process. The proposed method provides a better way to assist intelligent agricultural mobile robots in precisely performing weeding operations [30–33]; its implementation can help an intelligent weeding robot perform precise weeding operations and improve its working efficiency.

3. System Framework

Figure 1 shows an overview of the system framework of agricultural mobile robots for cornfield weeding. The detailed function introduction of the proposed system is as follows.

The depth camera is used to obtain real-time images from the video stream as RGB color images and is further used to achieve multitarget depth ranging and path planning for an optimized weeding path. Data preprocessing mainly includes target recognition and grayscale image processing. Target recognition performs recognition and automatic cutting of the corn and weed targets in the images, and grayscale processing uses the EXG method to produce grayscale images from the RGB color space. While preserving the performance of the algorithm, the improved OTSU algorithm effectively reduces the calculation cost by compressing the searched grayscale range. Canny-based edge detection extracts the contours of the corn and weed targets. The quadratic traversal algorithm consists of two parts: the first traversal extracts the specified area in the contour edge image, and the second traversal determines the corresponding 2D coordinate information on the keyframe color image extracted from the video stream. Depth ranging and shortest path planning verify the feasibility of the proposed method and achieve the experimental goals.

4. Image Processing Method and Quadratic Traversal Algorithm

4.1. Data Preprocessing Method

Under natural field conditions, in order to achieve automatic recognition of corn and weeds, a Faster R-CNN deep neural network based on the VGG-16 feature extraction network [34–36] is trained on the collected corn and weed image data, so as to obtain a deep network model that automatically identifies corn and weed targets.
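A minimal sketch of loading and running such a detector is given below. The paper trains Faster R-CNN on a VGG-16 backbone; the torchvision implementation used here ships a ResNet-50 FPN backbone instead, so this is only an approximation, and the checkpoint path, class list, and score threshold are hypothetical.

# Hedged sketch: Faster R-CNN inference for corn/weed recognition.
# Backbone, checkpoint path, and class names are assumptions, not the paper's exact setup.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["background", "corn", "weed"]          # assumed label set

def load_detector(weights_path="corn_weed_frcnn.pth"):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    return model.eval()

def detect(model, rgb_image, score_thresh=0.7):
    """Return [(label, score, (x1, y1, x2, y2)), ...] for one RGB image (H, W, 3)."""
    tensor = torch.from_numpy(rgb_image).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([tensor])[0]
    results = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score >= score_thresh:
            results.append((CLASSES[int(label)], float(score), tuple(box.tolist())))
    return results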

The depth camera (RealSense D435i) is turned on to obtain RGB images and the depth images aligned with them, and keyframe images (640 × 480) are then extracted from the video stream as RGB images. Each RGB image is compressed to a size of 500 × 400 and fed into the deep network model for target recognition. The corn-weed target recognition result is further used for the subsequent automatic image cutting.
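A hedged sketch of this capture step follows, using the pyrealsense2 library to grab an aligned color/depth frame pair at the stream settings described above and to resize the color frame to the model input size; the exact acquisition code of the paper is not given, so this is only an illustration.

# Hedged sketch: aligned RGB/depth keyframe from a RealSense D435i via pyrealsense2,
# then resizing the color frame to the 500 x 400 model input used in the paper.
import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)
align = rs.align(rs.stream.color)          # align depth pixels to the color frame

try:
    frames = align.process(pipeline.wait_for_frames())
    color = np.asanyarray(frames.get_color_frame().get_data())   # 480 x 640 x 3, BGR
    depth = np.asanyarray(frames.get_depth_frame().get_data())   # 480 x 640, uint16
    model_input = cv2.resize(color, (500, 400))                  # (width, height)
finally:
    pipeline.stop()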

Before grayscale image processing, we obtain the number of corn and weed target images in database A and database B and record the original length and width of each target image after cutting. Each image is then zoomed to a target size, here 640 × 480 pixels, and finally cut around its center to 500 × 400 pixels in order to retain the main information of the target image. At this point, the preparation of the grayscale image processing data is completed.

By observing the cropped images of the corn and weed targets, it is not difficult to see the color difference between the corn-weed targets and the soil background, which makes them easy to distinguish with traditional algorithms: the values of the target image in the R, G, and B color channels are obviously different. In this study, by examining linear combinations of the three color components R, G, and B, the green area of the target image can be extracted effectively. To select the best method for extracting the green area of the target image, the EXG method and the GMR method in the RGB color space and the Cg method in the YCrCb color space [37–39] are implemented; the detailed comparison and analysis are given in Section 6.1. An ultra-green (excess green, EXG) index is used to produce the grayscale images. The specific formula is

EXG = 2G − R − B. (1)

We then obtain the maximum and minimum values of the EXG array, and formula (2) linearly rescales the array to the 0-255 grayscale range using these extrema, producing the grayscale image used for the subsequent optimal segmentation threshold selection.
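The short sketch below illustrates this grayscale step under the assumption that formula (2) is the usual min-max rescaling to 0-255; variable names are our own.

# Hedged sketch of the EXG greyscale step: excess-green index EXG = 2G - R - B,
# rescaled to 0-255 with a min-max mapping (our reading of formula (2)).
import numpy as np
import cv2

def exg_grayscale(bgr_image):
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    exg = 2.0 * g - r - b                               # excess-green array
    exg_min, exg_max = exg.min(), exg.max()
    scaled = (exg - exg_min) / (exg_max - exg_min + 1e-9) * 255.0
    return scaled.astype(np.uint8)                      # grayscale image for thresholding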

4.2. Improved OTSU Algorithm

The OTSU algorithm, which automatically calculates the segmentation threshold, has been widely used in agricultural image processing [40, 41]. Leveraging the OTSU features, we propose an improved OTSU (IOTSU) algorithm. While preserving the performance of the algorithm, the IOTSU algorithm effectively improves the search speed and reduces processing time by compressing the searched grayscale range. The IOTSU algorithm is summarized by the following steps (a sketch of the procedure is given after the steps):

(a) For an imported grayscale image, equations (3) and (4) complete the parameter definition and initialization:

ω0 = N0 / N, (3)
ω1 = N1 / N, (4)

where ω0 is the ratio of the total number of pixels N0 of the foreground target image to the total number of pixels in the entire image, ω1 is the ratio of the total number of pixels N1 of the background image to the total number of pixels in the entire image, N0 is the number of pixels whose grayscale value is less than the foreground-background segmentation threshold T, N1 is the number of pixels whose grayscale value is greater than the threshold T, and N is the total number of pixels in the entire image.

(b) The relationship expressions (5) and (6) follow from equations (3) and (4) and the inherent relationship of the parameters:

N0 + N1 = N, (5)
ω0 + ω1 = 1. (6)

(c) Let μ be the average gray level of all pixels of the input grayscale image. The average gray level μ0 of the pixels of the foreground target image and the average gray level μ1 of the pixels of the background image are calculated from the image histogram (equation (7)). In addition, there is a linear relation among μ, μ0, and μ1, given in equation (8):

μ = ω0 μ0 + ω1 μ1. (8)

(d) From the equations in the above steps, the between-class variance σ² is obtained by equation (9). Substituting equation (8) into equation (9) and simplifying gives the equivalent equation (10):

σ² = ω0 (μ0 − μ)² + ω1 (μ1 − μ)², (9)
σ² = ω0 ω1 (μ0 − μ1)². (10)

(e) Traverse the compressed grayscale interval to obtain the segmentation threshold T for which the between-class variance is maximum; this T is the optimal segmentation threshold. First, obtain the average gray level μ from step (c). Second, obtain the minimum and maximum gray values of the grayscale image. Finally, within this grayscale interval, the golden section points on the left and right sides of the average gray level μ are taken as the endpoints of the compressed grayscale interval.

(f) The equivalent equation (10) is used to traverse the compressed grayscale interval and obtain the segmentation threshold T of the foreground target image and the background image that maximizes the between-class variance. According to the obtained threshold T, equation (11) generates the binary image from the imported grayscale image:

g(x, y) = L if f(x, y) > T, and g(x, y) = 0 otherwise, (11)

where L is the maximum value of the grayscale interval, taken as 255, f(x, y) is the grayscale value of the pixels of the grayscale image, and g(x, y) is the binary image generated by equation (11).

(g) For the generated binary image, an area threshold filtering operation is first performed to remove background regions wrongly assigned to the foreground target. Then, Gaussian filtering removes noise from the binary image. Finally, morphological operations smooth the binary image, producing the optimized binary image used for the subsequent edge detection.
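The following is a minimal sketch of the IOTSU idea: instead of scanning all 256 gray levels, the Otsu search is restricted to a sub-interval built from the golden-section points on either side of the mean gray level (our reading of step (e)), and the binary image is then cleaned with filtering and morphology (step (g)). The exact construction of the compressed interval and the filter parameters are assumptions.

# Hedged sketch of the compressed-interval Otsu search described in steps (a)-(g).
import numpy as np
import cv2

def iotsu_threshold(gray):
    mu = gray.mean()
    g_min, g_max = int(gray.min()), int(gray.max())
    phi = 0.618
    lo = int(mu - phi * (mu - g_min))          # golden-section point left of the mean (assumed)
    hi = int(mu + phi * (g_max - mu))          # golden-section point right of the mean (assumed)
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    best_t, best_var = lo, -1.0
    for t in range(lo, hi + 1):
        w0 = hist[:t + 1].sum() / total        # foreground weight
        w1 = 1.0 - w0                          # background weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t + 1] * np.arange(t + 1)).sum() / (w0 * total)
        mu1 = (hist[t + 1:] * np.arange(t + 1, 256)).sum() / (w1 * total)
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance, equation (10)
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def iotsu_binary(gray):
    t = iotsu_threshold(gray)
    binary = np.where(gray > t, 255, 0).astype(np.uint8)       # equation (11)
    binary = cv2.GaussianBlur(binary, (5, 5), 0)               # noise removal
    _, binary = cv2.threshold(binary, 127, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # morphological smoothing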

4.3. Edge Detection Algorithm Based on Canny

Complete and effective edge contour information plays an important role in studying the characteristics of corn and weed targets in the field. It also makes it easier to accurately select the 2D coordinate points of the corn and weed targets in the subsequent study. In general, to extract edge contour information as completely and effectively as possible, an appropriate edge detection algorithm can be selected by studying the extraction results of different edge detection operators [42]. In this study, the optimized binary image of the corn and weed targets is taken as the research object. We compare the Canny operator and the second-order Laplacian operator, and then three first-order edge detection operators: the Sobel, Roberts, and Prewitt operators.
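As a brief illustration, the OpenCV calls below apply three of the compared operators to the optimized binary image; thresholds, kernel sizes, and the input file name are illustrative only.

# Hedged sketch comparing edge operators on the optimized binary image.
import cv2

binary = cv2.imread("optimized_binary.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file

edges_canny = cv2.Canny(binary, 50, 150)
edges_laplacian = cv2.convertScaleAbs(cv2.Laplacian(binary, cv2.CV_16S, ksize=3))
sobel_x = cv2.Sobel(binary, cv2.CV_16S, 1, 0, ksize=3)
sobel_y = cv2.Sobel(binary, cv2.CV_16S, 0, 1, ksize=3)
edges_sobel = cv2.addWeighted(cv2.convertScaleAbs(sobel_x), 0.5,
                              cv2.convertScaleAbs(sobel_y), 0.5, 0)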

4.4. Quadratic Traversal Algorithm

In order to realize accurate selection of the 2D coordinate points of the corn and weed targets in the field crop image, the edge contour image of the corn and weed targets is taken as the research object. In this study, a quadratic traversal algorithm is proposed for selecting target 2D coordinate points in the pixel coordinate system, and a corresponding traversal search box is designed. The algorithm's main implementation steps are as follows:

S1. Define a row step size (in pixels), a column step size (in pixels), and a traversal search box whose height and width equal these two step sizes. Calculate the number of traversal search boxes in the row direction and the number in the column direction for the target contour edge image; the former is the number of search boxes walked in each row when the box advances in the row direction, and the latter is the number walked in each column when the box advances in the column direction.

S2. The traversal search box traverses the edge contour image of the corn and weed objects in row-priority order. For each position of the box, the number of pixels that meet the set condition is stored in database C in sequence, where the condition is that the R, G, and B values are all greater than 250. At the same time, the corresponding serial number of the traversal search box is recorded.

S3. Obtain the serial number of the traversal search box with the largest number of pixels meeting the set condition in database C. Note that the serial number is a positive integer counted from one.

S4. Calculate the position of the traversal search box on the edge contour image of the target from its serial number. The row and column position indices are natural numbers counted from zero.

S5. Calculate the 2D coordinates of the upper left and lower right corners of the traversal search box from its row and column position indices.

S6. According to the 2D coordinates of the upper left and lower right corners of the traversal search box, the specified area on the edge contour image of the target can be extracted.

S7. Select the pixel coordinates that meet the set condition within the traversal search box of the second traversal, fixing them as close as possible to the center of that box, and then map them back to the corresponding 2D coordinate points on the keyframe image (a sketch of the two traversals is given below).
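The sketch below follows steps S1-S7 under our own parameter choices: a coarse box slides over the contour image in row-priority order, the box with the most near-white pixels is kept, a finer box is searched inside it, and the selected pixel is offset back to keyframe coordinates. Box sizes and helper names are illustrative, not the paper's symbols.

# Hedged sketch of the quadratic (two-stage) traversal over the edge contour image.
import numpy as np

def densest_box(edge_rgb, box_h, box_w):
    """First traversal: (row, col) of the box with the most near-white (>250) pixels."""
    mask = (edge_rgb > 250).all(axis=2)
    h, w = mask.shape
    best, best_rc = -1, (0, 0)
    for r in range(0, h - box_h + 1, box_h):          # row-priority traversal
        for c in range(0, w - box_w + 1, box_w):
            count = mask[r:r + box_h, c:c + box_w].sum()
            if count > best:
                best, best_rc = count, (r, c)
    return best_rc

def select_target_point(edge_rgb, crop_offset, box_h=100, box_w=100, sub=20):
    """Second traversal inside the winning box; returns a 2D point on the keyframe."""
    r0, c0 = densest_box(edge_rgb, box_h, box_w)
    region = edge_rgb[r0:r0 + box_h, c0:c0 + box_w]
    r1, c1 = densest_box(region, sub, sub)            # finer box near the contour
    # take the centre of the finer box and add all offsets back to keyframe coordinates
    u = crop_offset[1] + c0 + c1 + sub // 2
    v = crop_offset[0] + r0 + r1 + sub // 2
    return u, v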

To further describe the core of the quadratic traversal algorithm in detail, Figure 2 gives a step-by-step explanation, taking a corn plant in a field crop image as an example.

Among them, Figure 2(a) shows the data preparation of the preprocessing part, with the corresponding key information marked on the image. The lower part of Figure 2(a) is the corn-weed target recognition result image, and the upper part is the result of cutting the corn target from the target recognition result image. Figure 2(b) introduces the core steps of the quadratic traversal algorithm and annotates the corresponding location information. The lower part of Figure 2(b) is an enlarged view of a local area inside the traversal search box; the selected pixel coordinates are expressed in the local coordinate system of that box, while the upper left and lower right corners of the traversal search box are expressed in the coordinate system of the edge contour image. The upper part of Figure 2(b) is the edge contour image of the corn target with the main information area retained. Figure 2(c) shows the result of mapping the selected pixel coordinates back to the corresponding 2D coordinates on the keyframe image (step S7); these 2D coordinates are expressed in the pixel coordinate system of the keyframe image.

4.5. Depth Ranging and Shortest Path Planning

The accurately selected 2D coordinate points of the corn and weed targets are obtained in the cropped image. Then, target distance measurement and shortest weeding path planning are required. There are generally four coordinate systems in computer vision: the pixel coordinate system, the imaging coordinate system, the camera coordinate system, and the world coordinate system. Depth camera ranging is generally carried out in the camera coordinate system; its core is to transform a target 2D coordinate point in the pixel coordinate system into a 3D coordinate point in the camera coordinate system.

However, the 2D coordinate point information is expressed in the pixel coordinate system, and the corresponding 3D coordinate point in the camera coordinate system must be generated through coordinate system conversion. At this point, the camera internal parameters need to be acquired by depth camera calibration [43, 44]. The principal point coordinates (c_x, c_y) in the imaging coordinate system are used to realize the conversion between the pixel coordinate system and the imaging coordinate system, and the focal lengths f_x and f_y of the depth camera are used to realize the conversion between the imaging coordinate system and the camera coordinate system.

Next, the depth value of the target's 2D coordinate point is read from the depth image aligned with the color image, and the scale factor converting depth pixel units into real-world units is obtained. In the end, the conversion from the pixel coordinate system to the camera coordinate system can be completed directly by equation (17), so that the target 2D coordinate point in the pixel coordinate system is transformed into a 3D coordinate point in the camera coordinate system. Using the distance formula between two points in 3D space, the distance between the corn target and a weed target, as well as the distance between weed targets, can then be computed.
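A minimal sketch of this back-projection is shown below, assuming the standard pinhole model with intrinsics (f_x, f_y, c_x, c_y) and a depth scale factor; the intrinsic values and pixel inputs in the example are made up for illustration.

# Hedged sketch: pixel (u, v) plus aligned depth -> 3D point in the camera frame,
# then Euclidean distance between two targets.
import numpy as np

def pixel_to_camera(u, v, depth_raw, fx, fy, cx, cy, depth_scale):
    z = depth_raw * depth_scale                 # raw depth units -> metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def target_distance(p1, p2):
    return float(np.linalg.norm(p1 - p2))       # distance between two 3D targets

# Example with made-up numbers resembling 640 x 480 RealSense intrinsics.
corn = pixel_to_camera(320, 240, 1500, 615.0, 615.0, 320.0, 240.0, 0.001)
weed = pixel_to_camera(380, 260, 1480, 615.0, 615.0, 320.0, 240.0, 0.001)
print(round(target_distance(corn, weed), 3), "m")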

Based on the multitarget depth ranging above, we take the corn crop target as the starting position of the shortest weeding path planning. Using the Dijkstra algorithm for the shortest weeding path planning achieves excellent experimental results [45]. Figure 3 shows a detailed diagram of the Dijkstra algorithm.
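For completeness, a short Dijkstra sketch over a pairwise distance graph is given below; the graph values are made-up examples, not measured field data.

# Hedged sketch of the Dijkstra step: shortest distances from the corn plant
# to every weed, given measured pairwise distances in metres.
import heapq

def dijkstra(graph, start):
    """graph: {node: {neighbour: distance}}; returns shortest distance to every node."""
    dist = {node: float("inf") for node in graph}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue
        for nxt, w in graph[node].items():
            if d + w < dist[nxt]:
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    return dist

distances = {
    "corn":  {"weed1": 0.32, "weed2": 0.45},
    "weed1": {"corn": 0.32, "weed2": 0.18, "weed3": 0.40},
    "weed2": {"corn": 0.45, "weed1": 0.18, "weed3": 0.27},
    "weed3": {"weed1": 0.40, "weed2": 0.27},
}
print(dijkstra(distances, "corn"))   # shortest reachable distance to each weed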

5. Experimental Setup

5.1. Cornfield Mobile Robotics Platform

GPS (global positioning system) is an omnidirectional, all-weather, all-time, high-precision satellite navigation system that provides global users with low-cost, high-precision three-dimensional position, velocity, and precise timing information. The lidar (VLP-16) is responsible for constructing a real-time 2D or 3D navigation map of the cornfield at close range and providing real-time 3D point cloud information of the surroundings, which further provides precise navigation information for the cornfield mobile robotics platform. The depth camera, an RGB-D camera, has a pair of left and right stereo infrared cameras, an infrared dot-matrix laser emitter, and an RGB camera [46, 47]. Its size is 90 mm × 25 mm × 25 mm, and it is suitable for indoor and outdoor environments. The depth camera performs binocular stereo distance measurement based on triangulation: the pair of stereo infrared cameras collects depth information of the target, and the infrared dot-matrix laser emitter projects structured light features onto the targets in the visual scene. The RGB camera collects color image data, and the color image video stream can be aligned with the depth image video stream. The maximum range is up to 10 meters. It is widely used in research fields such as drones, robots, and AR/VR.

The Universal Robots UR5 collaborative robotic arm has six rotary joints (degrees of freedom) and can perform automated tasks with a maximum payload of 5 kg; its effective working radius is up to 850 mm. A robotic mobile base (Husky A200) is used as the mobile carrier of the cornfield mobile robotics platform. It is four-wheel driven, its maximum payload is 75 kg, and its maximum speed reaches 1 m/s. The workstation, a high-performance processing unit, is essentially an industrial personal computer; it is used both to deploy the algorithms we designed and to communicate with the key devices mentioned above.

On a workstation running the Windows 10 operating system, the supporting software development kit for the RealSense D435i (Intel RealSense SDK 2.0) is used to build the image acquisition and target ranging software. This software includes the driver of the RealSense D435i depth camera, which allows the depth camera to collect image depth information and RGB information at a rate of not less than 20 frames per second (fps). At the same time, the collected images are processed to 640 × 480 pixels. The purpose is to measure the distances between corn and weeds and between weeds based on the currently acquired image depth information and then plan the shortest weeding path. Figure 4 shows the specific details.

5.2. Data Collection and Preprocessing

The cornfield data of this experiment were collected in the agricultural experimental field, called “Nong Cui Yuan,” of Anhui Agricultural University. According to the time of seeding and growth of corn, the data collection days were from May 1 to May 4, 2019. To ensure clear image collection, the collection time depended on strong visibility, from 9:00 AM to 12:00 PM and from 2:00 PM to 5:00 PM. During collection, images of corn and weeds under natural conditions were collected from three directions (head-up, top-view, and 45 degree squint), and the collection steps were strictly followed. The image data acquisition equipment was a high-definition digital camera (Canon EOS 6D Mark II). A total of 3906 images of corn with weeds were collected. All images are in JPEG format with a resolution of 5472 × 3648 pixels. Some of the collected corn and weed images are shown in Figure 5; Figures 5(a)–5(c) show different numbers of corn plants and weeds in the agricultural experimental field.

To improve computational efficiency while preserving the useful information in the images, the images are compressed to a resolution of 500 × 400 based on our experience. In this dataset, 3500 samples are randomly assigned as training data, and the remaining 406 samples are testing data, following a ratio of 8 : 2 between the training set and the validation set. The dataset was manually labeled; the corn and weeds in the images are annotated using the minimum circumscribed rectangle method.

6. Experimental Results and Evaluation

6.1. Results of Data Preprocessing

In data preprocessing, Figure 6 shows the results of corn and weed recognition and of the automatic target cutting described in Section 4.1.

Figure 7 shows the grayscale images produced by the green area extraction methods of Section 4.1. The first column of Figure 7 shows the grayscale results of the EXG method, the second column shows those of the GMR method, and the third column shows those of the Cg method.

6.2. Evaluation of the Extraction Methods for Greenness Detection

To better perform grayscale image processing, we select the EXG method for extracting the green area of the target image and use it to extract the green areas of the corn and weed targets in the field crop image at the same time. In this study, the cropped corn and weed targets are taken as the research objects, and multiple sets of binary images are generated by changing only the grayscale generation method. We compare the green area extraction effects of the EXG and GMR methods in the RGB color space and of the Cg method in the YCrCb color space. The detailed experimental results are shown in Figure 8.

In this study, we introduce a standard image as a benchmark (Figure 8). The traditional OTSU algorithm, changing only the grayscale image generation method, is used to generate multiple sets of binary images for comparison. Three indicators, ratio, variance, and standard deviation, are used to measure the pros and cons of the EXG, GMR, and Cg methods. The ratio is defined as the number of pixels in the black area of the binary image generated by the unoptimized OTSU algorithm divided by the number of pixels in the black area of the standard image. The variance is the mean squared deviation of the ratio from its average over the experimental images participating in the comparison, and the standard deviation is its square root. The detailed comparison results are shown in Table 1.


Table 1: Comparison of the color index methods.

Color index    Ratio    Variance    Standard deviation
EXG            1.043    0.001       0.031
GMR            0.877    0.025       0.160
Cg             1.002    0.002       0.044

Combining the results of Figure 8 and Table 1, it can be seen that the Cg method in the YCrCb color space behaves abnormally: because of its obvious errors in extracting the green area of the weed targets, a large area is classified incorrectly, which also distorts its average ratio. The EXG method in the RGB color space effectively suppresses the interference of environmental factors such as soil background, dry grass, and shadow, and it extracts the green areas of the corn and weed targets more distinctly. It is therefore suitable for processing field crop images under natural conditions and yields an ideal grayscale image.
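The short sketch below shows how the three indicators of Table 1 can be computed; the black-pixel convention (0 = black) and helper names are assumptions made for illustration.

# Hedged sketch of the ratio / variance / standard deviation indicators in Table 1.
import numpy as np

def black_ratio(candidate, standard):
    """candidate, standard: uint8 binary images (0 = black, 255 = white)."""
    return (candidate == 0).sum() / max((standard == 0).sum(), 1)

def ratio_statistics(ratios):
    r = np.asarray(ratios, dtype=np.float64)
    mean = r.mean()
    variance = ((r - mean) ** 2).mean()          # mean squared deviation of the ratio
    return mean, variance, np.sqrt(variance)     # ratio, variance, standard deviation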

6.3. Results of Edge Detection

By comparing the experimental edge detection results, we find that the edge detection algorithm based on the Canny operator gives the best effect [48]. The specific effect is shown in Figure 9.

6.4. Evaluation of the Quadratic Traversal Algorithm

A quadratic traversal algorithm is proposed for selecting the target 2D coordinate points. By taking a single plant of corn and weeds in the keyframe image as the research goal, a series of image processing and optimization operations are used to obtain the key corn and weed target edge contour image. The quadratic traversal algorithm is mainly based on the target edge contour image as a research basis. Through the combination of the first traversal and the second traversal, it is possible to accurately select the 2D coordinate points’ information of the target on the keyframe image.

The detailed process is shown in Figure 10. First, the target recognition results on the keyframe image are cut out, and the data are prepared after a series of zooming operations. Second, the EXG method in the RGB color space is used to complete the grayscale processing of the image. For the grayscale image, the IOTSU algorithm generates and optimizes the binary image, and the Canny edge detection algorithm is then applied to the binary image to extract the target edge contours. Finally, on the target edge contour image, the first traversal of the quadratic traversal algorithm uses the traversal search box to find the pixel area that meets the set condition, and this area is cut out and stored locally for the second traversal. The second traversal takes the output of the first traversal as its input, uses another traversal search box to find the pixel area that meets the set condition, and selects a pixel fixed as close as possible to the center position. According to step S7 of the quadratic traversal algorithm, this pixel is then mapped back to the corresponding position of the target in the keyframe image, realizing the selection of the 2D coordinate points.

For the design of the traversal search box in the first traversal, this study starts from the size of the target edge contour image (500 × 400) produced by edge detection and requires the traversal search box to cover the target contour edge image as completely as possible. Therefore, traversal search boxes with sizes of 50 × 40, 50 × 100, and 100 × 100 were designed in order to select an appropriate size. Thirty corn and weed target contour images were taken as the total sample, and the numbers of successful and failed samples for each traversal search box were counted. Table 2 presents the performance of the various traversal search box sizes. The success rate of the traversal search box with a size of 100 × 100 reaches 90.0%, which meets the actual requirements of the experiment.


Table 2: Success rate of different traversal search box sizes.

Traversal box    Experimental samples    Successful samples    Failed samples    Success rate (%)
50 × 40          30                      22                    8                 73.3
50 × 100         30                      26                    4                 86.6
100 × 100        30                      27                    3                 90.0

6.5. Evaluation of Depth Ranging and Shortest Path Planning

To verify the feasibility of the proposed research method, we conducted a systematic test in the agricultural experimental field, called “Nong Cui Yuan,” of Anhui Agricultural University. On the one hand, starting from keyframe images of the color video stream captured in real time by the depth camera, the 2D coordinate points of the corn and weed targets in the field crop image can be accurately selected with the method proposed in this study. Multitarget depth ranging and shortest weeding path planning can then be achieved, and the system test achieved favorable experimental results.

Figure 11 shows the multitarget ranging results and the shortest weeding path planning results in different scenarios; the results are highlighted for easier observation. Figure 11(a) shows the multitarget ranging and shortest path planning results for a single corn plant and multiple weeds under a 45 degree squint angle. Figure 11(b) shows the results for a single corn plant and multiple weeds under a top-view angle. As complex and special scenarios, Figures 11(c) and 11(d) show the results for multiple corn plants and multiple weeds. In Figures 11(a)–11(d), the first column contains the original images obtained by the depth camera. The second column contains the ground-truth shortest weeding path planning images obtained by manual measurement: starting from the field corn target, each step is marked with the corresponding number in the direction shown by the arrow. The third column shows the multitarget ranging results; the upper left corner of each image gives the depth camera ranging result, and the corresponding 2D coordinate points of the targets are marked. The fourth column shows the experimental results of our proposed algorithm, and the fifth column shows the extracted shortest weeding paths.

On the other hand, take the scenario of a single corn plant and multiple weeds as an example; Figure 12 shows the detailed processing flow. We collected distance data by manual measurement in the agricultural experimental field: the distances between the corn target and each weed target, and between weed targets, within the field of view of the depth camera were recorded. The unit of the distance data is meters, and the corresponding distance statistics between the targets are shown in Figure 13(a). According to the recorded distance statistics and the idea of the Dijkstra shortest path algorithm, we manually calculated the corresponding shortest weeding path; the specific calculation process is shown in Figure 13(b). By observing and comparing the shortest weeding path extracted in Figure 12(a) with the manually calculated shortest weeding path in Figure 13(b), it is not difficult to find that the shortest weeding paths obtained by the two methods are consistent, which further illustrates the feasibility and accuracy of our research method.

7. Comparison to Other Works

7.1. Performance Comparison of Image Segmentation

Building on the ideal grayscale image, the next step is to generate and optimize the binary image from the grayscale image. In this study, the PSO algorithm [49, 50], the traditional OTSU algorithm [51], and the proposed IOTSU algorithm were each used to segment the corn and weed targets in the field crop image and generate a binary image. Taking the original corn and weed target image in Figure 9 as an example, the image segmentation effects of the PSO, traditional OTSU, and IOTSU algorithms were studied. It is obvious that the segmentation effect of the IOTSU algorithm is better than those of the traditional OTSU algorithm and of the PSO algorithm, which loses a lot of important information.

In order to further evaluate the performance of the segmentation algorithms, three aspects are evaluated: segmentation threshold, running time, and diversity rate. The diversity rate is the relative difference between the average segmentation threshold of the evaluated algorithm and the average segmentation threshold of the reference OTSU algorithm, expressed as a percentage; for example, for the PSO algorithm in Table 3, |130.3 − 112.3| / 112.3 × 100% ≈ 16%. Table 3 lists the performance of the three segmentation algorithms, where the segmentation threshold and running time are average values.


Table 3: Performance comparison of the segmentation algorithms.

Algorithm    Original image (pixels)    Cut image (pixels)    Threshold value    Running time (s)    Diversity rate (%)
PSO          640 × 480                  500 × 400             130.3              0.154               16.1
OTSU         640 × 480                  500 × 400             112.3              0.507               0
IOTSU        640 × 480                  500 × 400             112.3              0.225               0

As shown in Table 3, the PSO algorithm has the fastest processing speed of the three and can meet real-time processing needs, but its segmentation performance is poor, its diversity rate is large, and it fails to find the best segmentation threshold. The traditional OTSU algorithm can find the optimal segmentation threshold, but its high time cost cannot meet the real-time requirement because it traverses the entire grayscale space to find the optimal threshold. Compared with the traditional OTSU algorithm, the IOTSU algorithm preserves the segmentation performance, obtains the same segmentation threshold, and runs at a faster speed. Its processing time is close to that of the PSO algorithm, so it can meet the needs of real-time image processing.

7.2. Performance Comparison of Path Planning

In order to further evaluate the performance of the quadratic traversal algorithm, two aspects are evaluated: running time and diversity rate. Here, the diversity rate is the relative difference between the sum of the multitarget weed distances estimated by the algorithm and the sum of the manually measured true distances, expressed as a percentage. Table 4 lists the performance of the three different algorithms.


Table 4: Performance comparison of the path planning methods.

Algorithm                        Number of targets    Running time (s)    Diversity rate (%)
Genetic algorithm [52]           7                    4.20                18.3
Path planning algorithm [28]     7                    8.63                32.4
Quadratic traversal algorithm    7                    9.10                3.90

As shown in Table 4, although the genetic algorithm has the fastest processing speed of the three, it can only be applied when the positions of the targets are already known; therefore, the genetic algorithm cannot meet the real-time processing requirements of agricultural weeding robots. The running times of the path planning algorithm and the quadratic traversal algorithm are close. To further illustrate the superiority of the proposed quadratic traversal algorithm, we compared the diversity rates of the three algorithms for the same number of targets; the proposed method has a lower diversity rate than the genetic algorithm and the path planning algorithm. Therefore, the overall performance of the proposed method can basically meet the real-time processing requirements of agricultural weeding robots.

8. Conclusions

In this study, the task of one-time weed removal operations in cornfields is studied. The major innovations and contributions of this study are as follows:

(1) The Faster R-CNN deep network model based on the VGG-16 feature extraction network is used to realize real-time target recognition and complete automatic cutting and classification of the targets. By returning the predicted bounding box regression parameters and the color of the predicted bounding box, the target category in the image can be accurately determined, and the data connection between deep learning and the traditional algorithms is realized.

(2) The improved OTSU algorithm achieves the generation and optimization of the binary images. Compared with the traditional OTSU algorithm, it compresses the searched grayscale interval, effectively improves the search speed, and makes the proposed path planning calculation time efficient. It meets the real-time data processing requirements, which allows our method to be further applied to mobile agricultural weeding robots in the field.

(3) A quadratic traversal algorithm is proposed for selecting the target 2D coordinate points, and the corresponding traversal search box is designed; the search success rate of the traversal search box with a size of 100 × 100 reaches 90.0% on the testing data. By transforming the target 2D coordinate points in the pixel coordinate system into 3D coordinate points in the camera coordinate system, the depth camera achieves multitarget depth ranging and shortest weeding path planning while avoiding the use of complex point cloud information. This effectively saves computing resources and avoids redundant information.

(4) The application of the proposed methods can assist intelligent weeding robots in carrying out precise weeding operations and improve their efficiency. It also has important practical significance for promoting the application of intelligent weeding robots in the field.

9. Future Works

Our future work includes two aspects: (1) quantitative analysis of the robot's power consumption and (2) consideration of the influence of outdoor dynamic environmental factors.

(1) Our optimized weeding path planning has the potential to reduce the power consumption of the robot. To support this qualitative conclusion, we will carry out quantitative analysis experiments in future work.

(2) Because the depth camera is affected by different outdoor environmental factors, we will design corresponding experiments for various environmental conditions in future work. At the same time, we will fuse multiple sensors to develop more robust algorithms and overcome the weaknesses of the currently used depth camera.

Data Availability

The corn field data of this experiment were collected in the agricultural experimental field, called “Nong Cui Yuan,” of Anhui Agricultural University and are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (31271615) and supported by the Ministry of Agriculture and Rural Affairs International Cooperation Project (125A0607) and the Ministry of Agriculture “948 Plan” to continue supporting key projects (2016-X34, 2015-Z44).

References

1. C. Chang and K. Lin, “Smart agricultural machine with a computer vision-based weeding and variable-rate irrigation scheme,” Robotics, vol. 7, pp. 1–17, 2018.
2. S. Sabzi and Y. Abbaspour-Gilandeh, “Using video processing to classify potato plant and three types of weed using hybrid of artificial neural network and particle swarm algorithm,” Measurement, vol. 126, pp. 22–36, 2018.
3. H. David, D. Feras, P. Tristan, and M. Chris, “A rapidly deployable classification system using visual data for the application of precision weed management,” Computers and Electronics in Agriculture, vol. 148, pp. 107–120, 2018.
4. A. Chandez, V. Tewari, and S. Kumar, “On-the-go position sensing and controller predicated contact-type weed eradicator,” Current Science, vol. 114, pp. 1485–1484, 2018.
5. T. Bakker, K. Asselt, J. Bontsema, J. Müller, and G. Straten, “Systematic design of an autonomous platform for robotic weeding,” Journal of Terramechanics, vol. 47, no. 2, pp. 63–73, 2010.
6. H. Wang, W. Mao, G. Liu, X. Hu, and S. Li, “Identification and location system of multi-operation apple robot based on vision combination,” Transactions of the Chinese Society of Agricultural Machinery, vol. 43, pp. 165–170, 2012.
7. M. Montalvo, J. M. Guerrero, J. Romeo, L. Emmi, M. Guijarro, and G. Pajares, “Automatic expert system for weeds/crops identification in images from maize fields,” Expert Systems with Applications, vol. 40, no. 1, pp. 75–82, 2013.
8. R. Bogue, “Robots poised to revolutionise agriculture,” Industrial Robot: An International Journal, vol. 43, no. 5, pp. 450–456, 2016.
9. O. Bawden, J. Kulk, R. Russell et al., “Robot for weed species plant-specific management,” Journal of Field Robotics, vol. 34, no. 6, pp. 1179–1199, 2017.
10. J. M. Guerrero, M. Guijarro, M. Montalvo et al., “Automatic expert system based on images for accuracy crop row detection in maize fields,” Expert Systems with Applications, vol. 40, no. 2, pp. 656–664, 2013.
11. P. Lottes, M. Hoeferlin, S. Sander, M. Müter, P. Schulze, and L. C. Stachniss, “An effective classification system for separating sugar beets and weeds for precision farming applications,” in Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 5157–5163, Stockholm, Sweden, May 2016.
12. A. Bechar and C. Vigneault, “Agricultural robots for field operations: concepts and components,” Biosystems Engineering, vol. 149, pp. 94–111, 2016.
13. M. P. Arakeri, B. P. Kumar, S. Barsaiya et al., “Computer vision based robotic weed control system for precision agriculture,” in Proceedings of the 2017 International Conference on Advances in Computing and Communications, pp. 1201–1205, Udupi, India, August 2017.
14. J. Bengochea-Guevara, J. Conesa-Muñoz, D. Andújar, and A. Ribeiro, “Merge fuzzy visual servoing and GPS-based planning to obtain a proper navigation behavior for a small crop-inspection robot,” Sensors, vol. 16, no. 3, p. 276, 2016.
15. A. Maruyama and K. Naruse, “Development of small weeding robots for rice fields,” in Proceedings of the IEEE/SICE International Symposium on System Integration, pp. 99–105, Tokyo, Japan, December 2014.
16. L. Nan, Z. Chunlong, C. Ziwen et al., “Crop positioning for robotic intra-row weeding based on machine vision,” International Journal of Agricultural and Biological Engineering, vol. 8, no. 6, pp. 20–29, 2015.
17. Q. Zhang, M. E. Shaojie Chen, B. Li et al., “A visual navigation algorithm for paddy field weeding robot based on image understanding,” Computers and Electronics in Agriculture, vol. 143, pp. 66–78, 2017.
18. F. B. P. Malavazi, R. Guyonneau, J.-B. Fasquel, S. Lagrange, and F. Mercier, “LiDAR-only based navigation algorithm for an autonomous agricultural robot,” Computers and Electronics in Agriculture, vol. 154, pp. 71–79, 2018.
19. S. Gokul, R. Dhiksith, S. A. Sundaresh et al., “Gesture controlled wireless agricultural weeding robot,” in Proceedings of the International Conference on Advanced Computing, pp. 926–929, Cairo, Egypt, October 2019.
20. Ł. Chechliński, B. Siemiątkowska, and M. Majewski, “A system for weeds and crops identification—reaching over 10 FPS on raspberry pi with the usage of MobileNets, DenseNet and custom modifications,” Sensors, vol. 19, p. 3787, 2019.
21. S. Kanagasingham, M. Ekpanyapong, R. Chaihan et al., “Integrating machine vision-based row guidance with GPS and compass-based routing to achieve autonomous navigation for a rice field weeding robot,” Precision Agriculture, vol. 21, no. 4, pp. 831–855, 2020.
22. K. H. Choi, S. K. Han, S. H. Han, K.-H. Park, K.-S. Kim, and S. Kim, “Morphology-based guidance line extraction for an autonomous weeding robot in paddy fields,” Computers and Electronics in Agriculture, vol. 113, pp. 266–274, 2015.
23. M. A. Hossain and I. Ferdous, “Autonomous robot path planning in dynamic environment using a new optimization technique inspired by bacterial foraging technique,” Robotics and Autonomous Systems, vol. 64, pp. 137–141, 2015.
24. M. A. Contreras-Cruz, V. Ayala-Ramirez, and U. H. Hernandez-Belmonte, “Mobile robot path planning using artificial bee colony and evolutionary programming,” Applied Soft Computing, vol. 30, pp. 319–328, 2015.
25. D. Andújar, J. Dorado, C. Fernández-Quintanilla, and A. Ribeiro, “An approach to the use of depth cameras for weed volume estimation,” Sensors, vol. 16, no. 7, p. 972, 2016.
26. A. Bakhshipour and A. Jafari, “Evaluation of support vector machine and artificial neural networks in weed detection using shape features,” Computers and Electronics in Agriculture, vol. 145, pp. 153–160, 2018.
27. Y. Xu, Z. Gao, L. Khot, X. Meng, and Q. Zhang, “A real-time weed mapping and precision herbicide spraying system for row crops,” Sensors, vol. 18, no. 12, p. 4245, 2018.
28. X. Ya, G. Yuan, L. Yun, and B. Simon, “Development of a prototype robot and fast path-planning algorithm for static laser weeding,” Computers and Electronics in Agriculture, vol. 142, pp. 494–503, 2017.
29. B. Liu, R. Li, H. Li, G. You, S. Yan, and Q. Tong, “Crop/weed discrimination using a field imaging spectrometer system,” Sensors, vol. 19, no. 23, p. 5154, 2019.
30. X. P. Burgos-Artizzu, A. Ribeiro, M. Guijarro, and G. Pajares, “Real-time image processing for crop/weed discrimination in maize fields,” Computers and Electronics in Agriculture, vol. 75, no. 2, pp. 337–346, 2011.
31. R. Ji and L. Qi, “Crop-row detection algorithm based on random hough transformation,” Mathematical and Computer Modelling, vol. 54, no. 3-4, pp. 1016–1020, 2011.
32. X. L. Huang, W. D. Liu, C. L. Zhang et al., “Optimal design of rotating disc for intra-row weeding robot,” Transactions of the CSAM, vol. 43, pp. 42–46, 2012.
33. I. Vidović, R. Cupec, H. Željko et al., “Crop row detection by global energy minimization,” Pattern Recognition, vol. 55, pp. 68–86, 2016.
34. Z. Le, J. Xiu, F. Lei, and L. Shao, “Recognition method for weeds in rapeseed field based on Faster R-CNN deep network,” Laser & Optoelectronics Progress, vol. 57, pp. 304–312, 2020.
35. S. Ren, K. He, and R. Girshick, “Faster R-CNN: towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.
36. R. Girshick, “Fast R-CNN,” in Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1440–1448, Washington, NJ, USA, March 2015.
37. J.-L. Tang, X.-Q. Chen, R.-H. Miao, and D. Wang, “Weed detection using image processing under different illumination for site-specific areas spraying,” Computers and Electronics in Agriculture, vol. 122, pp. 103–111, 2016.
38. D. M. Bulanon, T. Kataoka, Y. Ota, and T. Hiroma, “AE-automation and emerging technologies,” Biosystems Engineering, vol. 83, no. 4, pp. 405–412, 2002.
39. Y. Zheng, Q. Zhu, M. Huang, Y. Guo, and J. Qin, “Maize and weed classification using color indices with support vector data description in outdoor fields,” Computers and Electronics in Agriculture, vol. 141, pp. 215–222, 2017.
40. S. L. Bangare, A. Dubal, P. S. Bangare, and S. T. Patil, “Reviewing Otsu's method for image thresholding,” International Journal of Applied Engineering Research, vol. 10, no. 9, pp. 21777–21783, 2015.
41. A. M. A. Talab, Z. Huang, F. Xi, and L. HaiMing, “Detection crack in image using Otsu method and multiple filtering in image processing techniques,” Optik, vol. 127, no. 3, pp. 1030–1033, 2016.
42. A. Bakhshipour, A. Jafari, S. M. Nassiri, and D. Zare, “Weed segmentation using texture features extracted from wavelet sub-images,” Biosystems Engineering, vol. 157, pp. 1–12, 2017.
43. W. Wang and C. Li, “Size estimation of sweet onions using consumer-grade RGB-depth sensor,” Journal of Food Engineering, vol. 142, pp. 153–162, 2014.
44. Y. Chéné, D. Rousseau, P. Lucidarme et al., “On the use of depth camera for 3D phenotyping of entire plants,” Computers and Electronics in Agriculture, vol. 82, pp. 122–127, 2012.
45. C. Nock, O. Taugourdeau, S. Delagrange, and C. Messier, “Assessing the potential of low-cost 3D cameras for the rapid measurement of plant woody structure,” Sensors, vol. 13, no. 12, pp. 16216–16233, 2013.
46. S. Paulus, J. Behmann, A.-K. Mahlein, L. Plümer, and H. Kuhlmann, “Low-cost 3D systems: suitable tools for plant phenotyping,” Sensors, vol. 14, no. 2, pp. 3001–3018, 2014.
47. B. Nenchoo and S. Tantrairatn, “Real-time 3D UAV pose estimation by visualization,” Proceedings, vol. 39, no. 1, p. 18, 2020.
48. J. Canny, “A computational approach to edge detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679–698, 1986.
49. F. Marini and B. Walczak, “Particle swarm optimization (PSO). A tutorial,” Chemometrics and Intelligent Laboratory Systems, vol. 149, pp. 153–165, 2015.
50. W.-B. Du, Y. Gao, C. Liu, Z. Zheng, and Z. Wang, “Adequate is better: particle swarm optimization with limited-information,” Applied Mathematics and Computation, vol. 268, pp. 832–838, 2015.
51. S. Lavania and P. S. Matey, “Novel method for weed classification in maize field using Otsu and PCA implementation,” in Proceedings of the 2015 IEEE International Conference on Computational Intelligence & Communication Technology, pp. 534–537, Ghaziabad, India, February 2015.
52. P. S. Akshatha, V. Vashisht, and T. Choudhury, “Open loop travelling salesman problem using genetic algorithm,” International Journal of Innovative Research in Computer & Communication Engineering, vol. 1, no. 1, 2013.

Copyright © 2021 Le Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

