Journal of Robotics
Volume 2015 (2015), Article ID 436490, 7 pages
Research Article

Visual Control System of a Spraying Robot for Hyphantria cunea Larva Nets

School of Mechanical & Automotive Engineering, Liaocheng University, Liaocheng 252059, China

Received 9 February 2015; Revised 4 May 2015; Accepted 12 May 2015

Academic Editor: Giovanni Muscato

Copyright © 2015 Ying Zhao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


In order to implement automatic spraying on Hyphantria cunea larva nets, a spraying robot system with monocular hand-eye coordination and smart targeting was designed according to the features of the target nets. The system performs planar two-dimensional motion driven by stepping motors on linear guide rails. Images are processed in real time to extract the net-curtain targets, defined by their border area, and the optimal spraying position is then determined. An identification algorithm based on a global net image is proposed to distinguish targets before and after spraying. A simulated environment was designed to verify the correctness of the method. Results showed that the highest over-spray rate was 288.5% and the spray-miss rate was 0.

1. Introduction

Hyphantria cunea is listed as a worldwide quarantine pest because of its strong reproductive ability and the wide range of plants it endangers [1]. In north China, Hyphantria cunea breeds three generations every year. Each generation passes through three stages: larva, pupa, and imago. The larval stage is the most harmful, since the larvae spin nets on the leaves within hours of hatching and consume large quantities of foliage, weakening and even killing the plant. The larval stage is also the best time to manage the pest, because the nets the larvae spin on the leaves are conspicuous. At present, the common management methods are manually removing the nets and spraying insecticide over a large area [2].

Precision agriculture requires killing the insects by targeted spraying based on machine vision. There are many reports on this problem [3–8]; the vision control system (VCS) is the key to this research. Staniak and Zieliński [9] reported the influence of model calibration on a visual servo control system. Gans et al. [10] developed a new kind of error vector for a Lyapunov-based visual servo controller to make the image more stable. Kosmopoulos [11] and Kumar and Behera [12] estimated the image Jacobian matrix for visual servoing, which enabled manipulators to be controlled quickly, stably, and accurately.

Zhang and Xiong [13] proposed a high-speed visual servo system for a robot dexterous arm that can achieve ball path planning. Guo et al. [14] developed an image-based visual servoing method for line grasping and controlled the mechanical arm accurately. Wang et al. [15] designed an image-based active disturbance rejection (ADRC) visual servo controller whose parameters were optimized with a niche PSO to control the end of the mechanical arm. In terms of spraying, identification of the target area in single-frame images has been investigated [16]. However, there are still no reports of a vision control system for this task.

In order to spray accurately, it is necessary to select targets and design a spraying plan. Our research builds a vision control system for the robot on the basis of a monocular hand-eye actuator. We propose a fast determination algorithm that exploits the growth habits of the pest and test it in a simulated environment.

2. General Mechanical Design of the System

2.1. Setup of the Working Mode

Because of the large size of the plant, the spraying device is driven by a lifting arm and spot-sprays in a circular motion around the plant. When the spraying device works, the lifting arm is raised to a height corresponding to the height of the plant, and the hand-eye actuators then move on the spraying holder so as to cover the whole plant. Hyphantria cunea larvae prefer light, so the nets are spun on the outermost parts of a plant. Full three-dimensional movement of the hand-eye actuator would lower processing efficiency; therefore, the main movement of the hand-eye actuator is planar, and as the lifting arm rises, the spraying range covers the plant in the vertical direction. Figure 1 shows the relationship between the spraying device and the plant location.

Figure 1: Sketch map of the relationship between the spraying device and plant location.
2.2. The Structure of the System and the Control Principle

Horizontal and vertical guide rails were set up on the upright spraying holder. A slider equipped with a nozzle and a camera served as the hand-eye actuator. The hand-eye actuator was fixed on the horizontal guide rail, with a stepping motor and a primary synchronous belt implementing the horizontal motion. A secondary synchronous belt drives the horizontal guide rail in up-and-down reciprocating motion so that the whole area can be covered. A dolly serves as the fulcrum for the holder and carries the lifting arm; a box on the back of the dolly houses the computer controller and the pesticide tank. The structure of the system and the control principle are shown in Figures 2 and 3.

Figure 2: Sketch map of the overall structure of the system.
Figure 3: Sketch map of the control system.

3. The Working Process of the System

3.1. Scanning Movement Configuration and General Control Strategy

In order to cover the whole plant, improve the efficiency, and reduce the spray miss rate, a scanning path has been designed for the hand-eye actuators (see Figure 4).

Figure 4: Sketch map of the main scanning motion.

A coordinate system was fixed on the holder, and the distance between adjacent scanning lines was decided by the nozzle's coverage of the plant. The spray coverage and the camera's field of view are shown in Figure 5. The scanning-line spacing itself was determined by experiment.

Figure 5: Sketch map of the spray coverage and the camera visual field coverage.

In order to enlarge the visual coverage and prevent splashed pesticide from polluting the camera, the distance between the actuator and the plant should be kept at a certain value and remain constant during spraying. Under this premise, and given the habit of the plant, it is assumed that part of the holder is parallel to a solid face of the plant. In theory, the covered area grows as the distance between the solid face and the nozzle increases; however, for the same target area, the captured image becomes smaller as the distance grows within the valid field. Therefore, the solid face of the plant nearest the actuator was chosen as the reference vertical plane. The camera's angle was set so that the camera's optical center and the spray center coincide. By adjusting the spray angle of the nozzle, the camera's field of view was kept smaller than the covered area, and sufficient spraying pressure was maintained throughout the test. For simplicity, the square inscribed in the circular area covered by one spray was taken as the spraying-coverage template, and its side length was recorded. To avoid missed spray, some overlap between adjacent templates was retained.
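The template geometry above can be sketched as follows: the side of the square inscribed in the circular spray footprint follows from the circle's diameter, and a small overlap margin shrinks the scanning step. The 10% overlap ratio here is an illustrative assumption, not a value from the paper.

```python
import math

def spray_template_side(circle_diameter):
    """Side length of the largest square inscribed in the circular
    spray footprint: a = D / sqrt(2)."""
    return circle_diameter / math.sqrt(2)

def scan_step(side, overlap_ratio=0.1):
    """Step between adjacent spray positions, shrunk by an overlap
    margin so neighboring templates overlap and no strip is missed."""
    return side * (1.0 - overlap_ratio)

# Example: a 0.5 m spray circle.
side = spray_template_side(0.5)   # about 0.354 m
step = scan_step(side)            # about 0.318 m
```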

When the distance between an actual net and the actuator exceeds the assumed working distance, some error appears under this hypothesis. However, the spraying motion is deliberately coarse, and the covered area grows with distance, so over many experiments the spray-miss rate remained 0. The assumption is also useful for obtaining depth information, and the processing efficiency can be improved at the same time. The overall control strategy of the servo control system is shown in Figure 6.

Figure 6: The overall control strategy of servo control system.
3.2. Determining the Optimum Spraying Position

During the motion of the actuator, following previous work [16], the edge profile of each net is extracted from every picture. The method is as follows. According to the color-distribution characteristics of Hyphantria cunea larva nets, the RGB color space is selected, and the differences between channel values for net curtains, leaves, and branches are analyzed. An (R-B) color model with the Otsu thresholding algorithm is then used to segment the images. Region labeling and Freeman chain coding are adopted to calculate the area of each region. A double threshold is determined, and residual noise is removed using the mean and standard deviation of the region areas. Finally, according to the sizes of the large regions, thin white and white regions are filled in using an improved dilation-erosion method.
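A minimal sketch of the (R-B)-plus-Otsu segmentation step, in Python with NumPy. The foreground polarity (net pixels having the larger R-B difference) and the synthetic example are assumptions for illustration; the actual polarity depends on the scene.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method on an 8-bit single-channel image: pick the
    threshold that maximizes the between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0 = 0.0   # weight of the low class
    sum0 = 0.0 # intensity sum of the low class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_net(rgb):
    """Compute the (R - B) difference image and binarize it with the
    Otsu threshold; returns a 0/1 mask of candidate net pixels."""
    diff = rgb[..., 0].astype(np.int16) - rgb[..., 2].astype(np.int16)
    diff = np.clip(diff, 0, 255).astype(np.uint8)
    t = otsu_threshold(diff)
    return (diff > t).astype(np.uint8)
```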

However, there is no need to spray every area; it is necessary to determine the optimum spraying position. A method of judging the boundary edges is proposed to handle the many disordered shapes of the nets. In this research the scanning directions are up, left, and right, and no spraying occurs during upward motion. The situations when moving right and left are symmetric, so take the right direction as an example. During uniform motion to the right, the spraying-coverage template, its side length, and the height and width of the image are determined. A strip along the left edge of the spraying-coverage template is set as the spraying-sensitive region, named Left (see Figure 7); if the direction of motion is to the left, the strip on the right side of the image is chosen instead. The optimum spraying position is reached when an unsprayed net appears in this region during the motion, and spraying does not end until no object remains in this region. Because droplets would disturb the image, spraying and judgment cannot run at the same time: once the optimum spraying position is decided, the actuator stops and sprays.
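The sensitive-region test might look like the following; strip_frac, the strip width as a fraction of the image width, is an illustrative parameter rather than the paper's value.

```python
import numpy as np

def net_in_sensitive_region(mask, strip_frac=0.25, direction="right"):
    """True if any unsprayed-net pixel lies in the spraying-sensitive
    strip: the left edge of the frame when scanning right, the right
    edge when scanning left."""
    w = mask.shape[1]
    strip_w = max(1, int(w * strip_frac))
    strip = mask[:, :strip_w] if direction == "right" else mask[:, -strip_w:]
    return bool(strip.any())
```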

Figure 7: Sketch map of the spraying sensitive region.
3.3. Marking the Sprayed Nets

It is necessary to mark sprayed nets in order to avoid repeated operations across two scanning lines. This study proposes a method of building an image of the whole net. The optimum spraying position is chosen, and the computer marks that position on the whole-net image. If the target is encountered a second time, whether to spray is decided according to the whole-net image, as follows.

(1) Setup. Assuming the frame rate is f fps, the time interval between adjacent frames is 1/f. If the relative speed of the camera is v (corresponding to the velocity of the actuator), then the distance covered during one interval is d = v/f. According to the premise and the image-forming principle, the image displacement of this motion is proportional to d through the focal length of the camera.
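A sketch of this displacement computation under a pinhole camera model; the focal length in pixels and the object depth are assumed parameters, not values from the paper.

```python
def image_shift_pixels(frame_rate_fps, speed_m_s, focal_len_px, depth_m):
    """Pixel shift between adjacent frames for a camera translating
    parallel to the target plane: dt = 1/fps, d = v * dt, and by the
    pinhole model the image shift is f * d / Z."""
    dt = 1.0 / frame_rate_fps
    d = speed_m_s * dt
    return focal_len_px * d / depth_m

# Example: 15 fps, 0.15 m/s actuator speed, f = 1500 px, Z = 1.5 m.
shift = image_shift_pixels(15, 0.15, 1500, 1.5)   # 10 pixels per frame
```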

(2) The Strategy of Creating the Local Net-Curtain Image. A part of the whole-net image is maintained as an image that varies with the motion of the actuator. Consider any target object entering the visual field for the first time. First, extract the outline of the object and decide whether it touches the edge of the image. If not, find the minimum square that encloses the outline and record the lengths of its two sides. If the object does not touch the edge and both side lengths are no larger than the template side, record this as condition 1; any other case is recorded as condition 2.
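Under this reading (condition 1 meaning that one spray suffices), the classification can be sketched as:

```python
def classify_target(bbox_w, bbox_h, template_side, touches_edge):
    """Condition 1: the target's bounding square fits within one
    spraying-coverage template and does not touch the frame edge;
    every other case is condition 2."""
    if not touches_edge and bbox_w <= template_side and bbox_h <= template_side:
        return 1
    return 2
```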

For condition 1, the process can be finished by one spraying; therefore, only mark the position, without building the whole-net image.

For condition 2, binarize the image, open a block of dynamic memory in storage, and give it an initial value that increases as the image advances frame by frame. Shift the stored binary image left by the inter-frame displacement (the right side of the image filled with 0, while the part pushed beyond the left edge is discarded) and then OR it with the next binary frame. The pixels on the right side of the resulting image are pushed into the dynamic memory. Repeat until there is no net in the sensitive region Left, so that the memory holds the whole net for one spray. The initial and terminal coordinates of the actuator are recorded. If there are several nets in one horizontal stroke, repeat the above process; otherwise, mark this stroke as empty.
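One shift-and-OR step of this mosaic build might be sketched as follows; shift_px is assumed to be the inter-frame pixel displacement from step (1).

```python
import numpy as np

def accumulate_net(stored, next_frame, shift_px):
    """Shift the stored binary image left by shift_px columns (right
    side padded with 0, pixels pushed past the left edge discarded),
    then OR with the next binary frame."""
    shifted = np.zeros_like(stored)
    if shift_px < stored.shape[1]:
        shifted[:, :stored.shape[1] - shift_px] = stored[:, shift_px:]
    return shifted | next_frame
```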

(3) Judging Sprayed versus Unsprayed Nets. When the apparatus moves to the next horizontal stroke and detects a target, it first decides which condition applies. For condition 1, mark the position immediately; for condition 2, further judgment is needed.

If the last stroke was empty, mark this area as an unsprayed area. Otherwise, first read the ordinate of the actuator and compare it in turn with all the recorded ordinate ranges. If none is correlated (a difference of more than one template of spraying overlap is regarded as uncorrelated), the area is marked as unsprayed. If a range is correlated, extract from the stored whole-net image the region, the same size as the original image, at the actuator's ordinate. Open another block of storage and copy the binarized current image into it. Take the bottom quarter of the stored region and the top quarter of the current image, perform an AND operation, and invert the result; then perform a bitwise AND with the current binary image. Count the nonzero pixels of the stored region and of the result. If the overlap count is large relative to the target, the area has already been sprayed and needs no further spraying; otherwise it is treated as a new target. Areas are checked in turn until there is no net in the sensitive region. When the actuator's ordinate passes the recorded terminal coordinate, the storage is released.

Otherwise, mark this area as an unsprayed area, build the whole-net image, and repeat step (2).
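The correlation test of step (3) can be sketched as an AND-and-count rule; the 0.5 overlap threshold is an assumption, since the paper's criterion is only described qualitatively here.

```python
import numpy as np

def overlaps_sprayed(stored_bin, current_bin, ratio=0.5):
    """Bitwise-AND the stored sprayed-net image with the current
    frame's binary image and compare the overlap pixel count with
    the current target's pixel count; a large overlap means the
    target was already sprayed."""
    inter = np.count_nonzero(stored_bin & current_bin)
    cur = np.count_nonzero(current_bin)
    return cur > 0 and inter / cur >= ratio
```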

3.4. The Definition of Multiframe Image Contour

In the same horizontal stroke, a larger target area cannot be completely covered, and stopping to spray at every frame would lead to low efficiency. Therefore, spaced spraying is needed in actual work, with the time interval determined by the actual speed and the size of the image. In our experiment, the characteristics of the net scene change greatly between frames: as shown in Figure 8, two frame images may show the same target despite being different images. It is therefore necessary to decide which target each frame shows; if the two targets in two adjacent frame images differ little, they are the same target.

(1) Store the contour images in a dynamic memory structure indexed by frame number and contour number, and store the corresponding contour areas in a parallel dynamic structure.

(2) Perform a bitwise AND of a contour image of the current frame and a contour image of the next frame; each output pixel is 1 only where both input pixels are 1. Store the result in the dynamic structure and record the area of the ANDed image.

(3) Compare the two frames. If the ratio of the ANDed area to either contour's area is large enough, the two frames show the same image contour; the contour with the smaller area is then a part of the contour with the larger area.

Figure 8: The continuous variety of the image contour.
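The frame-matching rule of Section 3.4 can be sketched as follows; the 0.8 area-ratio threshold is an illustrative assumption, not the paper's value.

```python
import numpy as np

def same_target(contour_a, contour_b, thresh=0.8):
    """Bitwise-AND two filled contour masks from adjacent frames.
    If the overlap area is a large fraction of either contour's
    area, treat them as the same target; the smaller contour is
    then a part of the larger one."""
    area_and = np.count_nonzero(contour_a & contour_b)
    area_a = np.count_nonzero(contour_a)
    area_b = np.count_nonzero(contour_b)
    if area_a == 0 or area_b == 0:
        return False, None
    same = area_and / area_a >= thresh or area_and / area_b >= thresh
    part = "a_in_b" if area_a <= area_b else "b_in_a"
    return same, (part if same else None)
```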

4. Experimental Results and Discussions

Hyphantria cunea mostly infests broadleaf trees. Experiments with an actual tree are difficult because of the plant's size, so a mimic was made of the French phoenix tree (a plane tree widely planted in north China). This tree can reach 30 meters in height, its bark is gray, and its leaves resemble maple leaves, 9–30 centimeters long and 9–35 centimeters wide, with irregularly dentate borders; the crown looks like an umbrella. The simulated tree (see Figure 9(a)) is 1.5 meters high with a crown 1.5 meters wide, and the simulated net was made of spun material. The experimental prototype is shown in Figure 9(b); the height and width of the holder were 2 and 1.6 meters, respectively. Rolling linear guide rails (WL-1116534-008, WL-1116530-013) with an available range of 1.61 m × 0.84 m guided the vertical and horizontal motion, respectively. The hand-eye actuator was fixed on the slider, and stepping motors (type 57BYGH314J-12) driven on four axes were used in our experiment.

Figure 9: The simulation tree (a) and experimental prototype (b).

A Lenovo Qitian M715E desktop computer was used as the processor, with an Intel Pentium Dual-Core E6700 CPU at 3.2 GHz and 2.7 GB of memory. The software was built with Visual Studio 2010 on Windows XP. The camera is a Daheng industrial camera (DH-HV3151UC-ML) with a frame rate of 15 fps.

Experimental results showed that the optimum distance between the vertical plane and the nozzle was 1.5 meters; at this distance the visual field was fully covered and the pesticide mist was densely distributed. The simulated net was adjusted to change the proportion between the net area in contact with the spray square and the area of the visual field. The results showed that the longest time the algorithm needed to process one frame, including identification and the sprayed/unsprayed judgment, was 57 ms, and the recognition rate for a single frame was more than 90%; the method is therefore qualified. In our experiments the spray-miss rate was 0, with a small amount of repeated spraying. The spray rate was computed from the area of the net in contact with the spray square and the area actually sprayed.
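A plausible reading of the spray-rate definition, given the two quantities named above; the printed equation is not reproduced here, so the exact form is an assumption.

```python
def spray_rate(net_contact_area, actual_spray_area):
    """Ratio of the actually sprayed area to the net area in contact
    with the spray square; a value of 1 would mean no over-spray."""
    return actual_spray_area / net_contact_area

# Example: spraying 5.0 units of area to cover 2.0 units of net.
rate = spray_rate(2.0, 5.0)   # 2.5, i.e., 250%
```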

Real-time processing results are shown in Figure 10. The images are arranged according to their actual positions; for clear display, the target areas are outlined with bold black lines in the original pictures.

With the visual-field area unchanged, the spray rate changed with the net area; this relationship can be seen in Figure 11.

Figure 10: Real-time processing results.

Figure 11 shows the spray rate for different net areas. For smaller net areas the spray rate is larger, because the spray coverage and the extent of the image are approximately equal. The spray rate reached its minimum when the contact proportion was 1; as the net area increased further, the spray rate rose again. Because the spraying distance and the flare angle of the nozzle were unchanged, the spray rate was also related to the distribution of the nets: with a concentrated net distribution the spray rate was lower, and with a scattered distribution it was higher.

Figure 11: Over spray rate under different network areas.

5. Conclusion

(1) Considering the growth form of the plant, part of the working surface can be set parallel to a solid face of the plant, with the nearest surface chosen as the reference surface. Spraying can then be reduced to two-dimensional motion in a vertical plane. This simplifies the spraying and also helps in obtaining depth information.

(2) For the disordered nets spun by Hyphantria cunea larvae, a method is suggested for judging boundary definition. To improve the robustness of the algorithm and to build the whole-net image, region areas are used instead of parameters such as edges or corners. Marking nets as sprayed or unsprayed is necessary to reduce the over-spray rate.

(3) A simulated environment, including a simulated broadleaf tree and a prototype, was designed, and the algorithm was tested many times. Experimental results showed that, with the processing time for one frame below 57 ms, the spray-miss rate was 0 and the highest over-spray rate was 288.5%. The spray rate can be kept within a fairly narrow range by controlling the spraying nozzle angle and the travel distance in real time.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

This work is supported by the Natural Science Foundation of Shandong Province (nos. ZR2012CQ026 and ZR2011EL038) and a Project of Shandong Province Higher Educational Science and Technology Program (nos. J11LD16 and J12LB63).


References

  1. Z.-Q. Yang, J.-R. Wei, and X.-Y. Wang, “Mass rearing and augmentative releases of the native parasitoid Chouioia cunea for biological control of the introduced fall webworm Hyphantria cunea in China,” BioControl, vol. 51, no. 4, pp. 401–418, 2006.
  2. J. Li, J. Chen, and P. Cai, “Research progress of occurrence and comprehensive control of fall webworm [Hyphantria cunea (Drury)],” Plant Diseases and Pests, vol. 4, no. 4, pp. 32–44, 2013.
  3. C. G. Sørensen, R. N. Jørgensen, J. Maagaard, K. K. Bertelsen, L. Dalgaard, and M. Nørremark, “Conceptual and user-centric design guidelines for a plant nursing robot,” Biosystems Engineering, vol. 105, no. 1, pp. 119–129, 2010.
  4. H. Kurosaki, H. Ohmori, and M. Takaichi, “Development of an automatic fruit-set-reagent spraying robot for tomato plants to promote uniform fruit ripening,” Acta Horticulturae, no. 952, pp. 931–936, 2012.
  5. Y. Ogawa, N. Kondo, M. Monta et al., “Spraying robot for grape production,” Field and Service Robotics, vol. 24, pp. 539–548, 2006.
  6. H. Tianxiang, Z. Jiaqiang, Z. Hongping et al., “Design method of software system for intelligent target oriented sprayer based on DSSA,” China Forestry Science and Technology, vol. 22, no. 2, pp. 68–70, 2008.
  7. C. Geng, J. Zhang, and Z. Cao, “Cucumber disease toward-target agrochemical application robot in greenhouse,” Journal of Agricultural Machinery, vol. 42, no. 1, pp. 177–180, 2011.
  8. Y. Dongfu, C. Shuren, and M. Hanping, “Weed control system for variable target spraying based on fuzzy control,” Journal of Agricultural Machinery, vol. 42, no. 4, pp. 179–183, 2011.
  9. M. Staniak and C. Zieliński, “Structures of visual servos,” Robotics and Autonomous Systems, vol. 58, no. 8, pp. 940–954, 2010.
  10. N. R. Gans, G. Hu, J. Shen, Y. Zhang, and W. E. Dixon, “Adaptive visual servo control to simultaneously stabilize image and pose error,” Mechatronics, vol. 22, no. 4, pp. 410–422, 2012.
  11. D. I. Kosmopoulos, “Robust Jacobian matrix estimation for image-based visual servoing,” Robotics and Computer-Integrated Manufacturing, vol. 27, no. 1, pp. 82–87, 2011.
  12. P. P. Kumar and L. Behera, “Visual servoing of redundant manipulator with Jacobian matrix estimation using self-organizing map,” Robotics and Autonomous Systems, vol. 58, no. 8, pp. 978–990, 2010.
  13. Y. Zhang and R. Xiong, “Real-time vision system for a Ping-Pong robot,” Scientia Sinica Informationis, vol. 42, no. 9, pp. 1115–1129, 2012.
  14. W. Guo, H. Wang, Y. Jiang, and P. Sun, “Visual servo control for automatic line-grasping of a power transmission line inspection robot,” Robot, vol. 34, no. 5, pp. 620–627, 2012.
  15. F. Wang, J. Liu, Z. Chen, and C. Jiao, “Auto disturbances rejection visual servoing control for excavator robot based on niching particle swarm optimization,” Journal of Mechanical Engineering, vol. 48, no. 1, pp. 32–38, 2012.
  16. Y. Zhao, Q. Sun, and G. Ge, “Image recognition algorithm of Hyphantria cunea larva net,” Transactions of the Chinese Society for Agricultural Machinery, vol. 44, no. 9, pp. 198–202, 208, 2013.