Abstract

In manually propagating potato test-tube plantlets (PTTPs), the plantlet is usually grasped and cut at the node between the cotyledon and stem, a point which is hard to locate and is easily damaged by the gripper. Using an agricultural intelligent robot in place of manual operation would greatly improve the efficiency and quality of PTTP propagation. An automatic machine-vision-guided system for the propagation of PTTPs was developed and tested. In this paper, the workflow of the vision system was designed and the image acquisition device was built. The image processing algorithm was then integrated with the image acquisition device to construct an automatic PTTP propagation vision system. An image processing procedure for locating node points was employed to determine a suitable operation point on the stem, and a binocular stereo vision algorithm was applied to compute the 3D coordinates of the node points. Finally, the kinematics equation of the three-axis parallel manipulator was established, and the three-dimensional coordinates of the nodes were transformed into the corresponding displacement parameters of the three driving sliders of the manipulator. The experimental results indicated that, for plantlets in the medial expansion period (22 days), the vision system achieved a success rate of 98.4%, a processing time of 0.68 s per three plantlets, and a location error of approximately 1 mm.

1. Introduction

Potato is one of the most important crops in the world. The technology of propagating virus-free potato test-tube plantlets has solved the problem of yield and quality degradation caused by virus infection and increases potato yield by more than 30%. The propagation of virus-free test-tube plantlets is the basic link of the virus-free seed potato breeding system. In the breeding of potato test-tube plantlets, the operator cuts the leafy stem segments of the test-tube plantlets in the mother culture dish and inserts them into a subculture dish with medium for rapid propagation. After several generations of propagation, the number of test-tube plantlets can be multiplied, which saves both time and cost in potato breeding [1]. The propagation of virus-free potato plantlets is a high-tech bioapplication that requires investment in propagation facilities and equipment, as well as the training of skilled workers for the production of culture medium and for inoculation [2]. However, the propagation of potato test-tube plantlets is a labor-intensive technique, with consequent costs. These aspects have encouraged the development of automated machines able to increase productivity and rooting success rates while reducing costs.

Research on automation technology and equipment for test-tube plantlets has been carried out in China and abroad. The first type is the propagation machine for test-tube plantlets. For example, Qu et al. developed a shearing method for wide-mouth bottles of potato plantlets, which was employed to gather, shear, and disperse a whole bottle of potato plantlets taken out of the plantlet-fetching mechanism, but the method led to serious damage to the plantlets [3]. Li et al. designed a seedling-picking machine for potato test-tube plantlets, which allows an entire bottle to be picked without damage in a single operation, based on the characteristics of the potato test-tube plantlets and their culture environment [4]. Kai et al. designed a picking hand based on negative-pressure adsorption for gripping the stems of strip tissue culture seedlings. The force-positioning method can effectively reduce damage to the plantlets; it simplifies the picking procedure and improves the survival rate of the plantlets [5]. The second type is the test-tube plantlet transplanting robot, which offers better flexibility and precision than the transplanting machine and has better research prospects. A tissue culture system based on machine vision was designed by the Toshiba Corporation of Japan. This system locates and cuts nodes with a 3-axis perpendicular detection arm and a transplant arm, with high precision and good robustness, but its shortcoming is the low efficiency of cutting and transplanting [6]. Kirin Brewery Company of Japan developed the TOMOCA system, which breaks clumps of tissue culture plants into 26 pieces at a time and transplants them into new culture flasks in groups of 9, achieving an efficiency ten times higher than manual work [7]. Yang et al. developed a transplanting robot system for strip-shaped test-tube plantlets. The system uses embedded machine vision to identify and locate the plantlet nodes and a self-developed 5-DOF jointed robot arm to grasp, cut, and transplant the plantlets. The system performs well in grasping and shearing plantlets, but not in the accuracy of motion positioning [8]. Owing to its compact structure, high stiffness-to-weight ratio, and good dynamic performance, the parallel mechanism (PM) is well suited to the needs of test-tube plantlet propagation [9]. Xie et al. designed an economical self-seeking parallel transplanting robot with high precision, which automatically photographs the breeding pot, calculates the center position of each plantlet through image analysis, and then transforms the expected motion trajectory into the generalized position coordinates of the driving joints via kinematic analysis and trajectory planning, finally realizing intelligent computer control within the working space [10]. To improve the automation and efficiency of plug plantlet transplanting in the greenhouse, a high-speed plug plantlet transplanting robot was designed using a 2-DOF parallel translation mechanism with a pneumatic manipulator. According to the coordinates of healthy plantlets, the manipulator was driven by the mechanism to fetch the plantlets along a planned path [11].

The key techniques in vision-based control include vision information acquisition strategies, target recognition algorithms, and eye-hand coordination methods [12]. To achieve automatic transplanting, the propagation robot needs to accurately identify the potato plantlets and calculate the spatial coordinates of the shear points. Yang et al. proposed a method to identify the position of single plantlets from the bottom of the seedling bottle according to the morphological characteristics of the plantlets in the culture bottle, which satisfies the precision requirements of seedling transplanting [13]. A vision-guided grasping system for Phalaenopsis tissue culture plantlets (PTCPs) was developed and tested. To mimic the manual transplantation operation, that study developed an image processing algorithm based on a binocular stereo vision system to segment the PTCP image into its constituent leaf and root components and to locate a suitable grasping position on the PTCP body. The locating algorithm was then integrated with a robotic gripper to construct an automatic Phalaenopsis plantlet grasping system [14]. An innovative multimodal vision system combining optical stereo vision and thermal imaging was developed, together with algorithms for detecting teats and determining their three-dimensional position [15]. A rapid and reliable method based on binocular stereo vision was developed to effectively recognize and locate litchi in the natural environment. A litchi recognition algorithm based on K-means clustering was presented to separate litchi from leaves, branches, and background. The experimental results showed that the proposed method was robust against varying illumination and precisely recognized litchi with the highest average recognition rate [16]. He et al. proposed a machine vision algorithm that restores plantlet leaves by ellipse fitting and extracts parameters for the automatic grafting of robots. The experiment showed that the algorithm can solve the problem of leaf occlusion, and the success rate of plantlet identification and localization reached 97.5%, which satisfied the requirements of the automatic operation of grafting robots [17]. Yang et al. developed a highly real-time vision system, which comprehensively evaluated the straightness and height of the transplanted plantlets and selected plantlets that satisfied the transplanting requirements [18]. Zhang et al. proposed a vision system for transplanting cucurbit plantlets that takes into consideration the actual working conditions of a self-made robot system. In this system, the cross-sectional diameter can be quickly extracted from the color image of the crop plantlets, and the spatial location of the growth points is then detected automatically [19]. Luo et al. developed a method for acquiring spatial information on grape clusters based on binocular stereo vision. This method includes four steps: calibrating the binocular cameras and rectifying the images, detecting the cutting points on the peduncle and the centers of the grape berries, extracting the three-dimensional spatial coordinates of the points detected in step 2, and calculating the bounding volume of the grape clusters. The demonstrated performance of this method indicated that it could be used on harvesting robots [20].

Combined with the actual working conditions of a self-made parallel robot system, this paper focuses mainly on locating the spatial coordinates of the cutting points on the stem of a test-tube plantlet for the end effector and presents a method for acquiring spatial information on the test-tube plantlet based on binocular stereo vision. This method determines the 3D coordinates of the cutting point on the stem for the end effector so that the robot can smoothly grasp the plantlet stem and cut it off. The research on the robot vision system is of great significance to the automation and intelligence of potato plantlet breeding.

The detailed objectives of the current study can be summarized as follows. Section 2.1 describes the experimental materials used in the research. Section 2.2 introduces the test platform of the image processing algorithm in this paper. Section 2.3 introduces the hardware structure and workflow of the vision module in the self-made propagation robot. Section 2.4 introduces the software of the vision module system developed on the LabVIEW platform. Sections 2.5 and 2.6 present the algorithms for image recognition and node location. Section 2.7 introduces the method of transforming the spatial coordinates of the nodes acquired by the vision system into the motion parameters of the 3-PUU parallel manipulator.

2. Materials and Methods

2.1. Testing Materials

The potato test-tube plantlets were provided by the College of Horticulture and Forestry of Huazhong Agricultural University. The virus-free potato plantlets were aseptically cultured in a culture box, with MS medium placed at the bottom of the box. The size of the culture box is 80 mm × 80 mm, and each box holds 9 potato plantlets in 3 rows and 3 columns. The age of the virus-free plantlets is 20–25 days, the height of the plantlets is 7–10 cm, and the diameter of the stem is 0.6–0.8 mm. When a tissue-cultured plantlet has grown 5 or 6 leaves, it is cut into segments, each carrying a leaf and an axillary bud, and the segments are then placed into the culture medium to grow into new test-tube plantlets.

2.2. Testing Equipment

The algorithm development and testing environment in this paper was a general-purpose computer (Intel Core i7-6700HQ 2.60 GHz, 16 GB DDR3, 120 GB SSD, NVIDIA 960M 2 GB) running Windows 10 Professional, MATLAB R2017b, and LabVIEW 2018. The image acquisition module used an STM32F103ZET6 microcontroller connected to a SYUE MS4101V1 binocular stereo camera module (1920 × 1080 at 30 fps, 3 mm focal length, 130° wide-angle, distortion-free). The image processing algorithm was implemented on the MATLAB R2017b platform. The system software was developed in LabVIEW 2018 for connecting, coordinating, and controlling the hardware and software modules; it used the MATLAB Script node of LabVIEW to call the MATLAB image processing program.

2.3. Vision System Architecture

Figure 1 is a schematic diagram showing the installation position and structure of the vision system in a self-made parallel robot system. The vision system mainly consisted of a binocular stereo camera, a background plate, and a linear conveyor. The vision system used natural light instead of special lighting. The culture box was carried to the specified position by the lateral linear conveyor and then was fixed. The background plate was inserted between the two rows of potato plantlets by a digital servo, so that the binocular stereo camera could take a color image pair of a single line of potato plantlets.

The workflow of the robot vision system is shown in Figure 2 and described in detail.

(a) System Initialization. The system controlled the horizontal linear conveyor and drove the culture box to the specified position, where the background plate could be inserted right between the two rows of potato plantlets.

(b) Binocular Image Acquisition. The system controlled the digital servo to insert the background plate, and then the binocular stereo camera acquired the image pair.

(c) Running the Image Processing Algorithm. The vision system software invoked an image processing algorithm, which identified and located the shear points and grip points of the potato plantlets and converted them into spatial coordinates for transmission to the controller of the end effector.

(d) Image Acquisition and Processing of the Next Line of Potato Plantlets. After a row of potato plantlets had been propagated, the conveyor moved a fixed distance, the background plate was inserted between the next two rows of potato plantlets, and steps (b)–(c) were repeated until all the potato plantlets had been processed.

2.4. Integrated Development of Vision System Software

The upper computer program of the vision system of the propagation robot was designed based on LabVIEW. LabVIEW communicated with the STM32 development board through serial communication and sent commands to control the coordinated operation of each electrical unit. When the potato plantlets reached the designated position, the vision system software controlled the binocular stereo camera to acquire the image pair and sent the image parameters to the MATLAB program it called. After the MATLAB program finished processing the image, it extracted the point coordinate parameters and sent them to LabVIEW. When the coordinate data were received, the parallel robot and the end effector completed the subsequent treatment of the potato plantlets. A slave computer was designed based on the STM32 development board. The electrical units included a stepping motor and a stepping motor driver board for the conveyor moving the Petri dish, a digital servo for moving the background plate, a parallel mechanical arm (including three stepping motors), and the binocular stereo camera. A 12 V regulated DC power supply powered each unit. After receiving the motion parameters, the STM32 development board output the corresponding pulse signals to rotate each motor by the required angle and run the other components.

2.5. Potato Plantlets Node Image Recognition
2.5.1. Image Preprocessing and Segmentation

Figure 3 shows the image segmentation process for the potato test-tube plantlets. Inserting a black background plate between the rows of potato plantlets weakened the background information and improved the efficiency and quality of the segmentation of the potato plantlet image. The segmentation of the potato plantlets comprised three steps.

(a) Cotyledon Image Segmentation. In the potato plantlet image, the G component of the cotyledons was significantly larger than the other two components. The extra-green (ExG) feature 2G − R − B was therefore computed and segmented with the Otsu threshold to obtain a binary image of the cotyledons.
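A minimal MATLAB sketch of this step is given below; the input file name and variable names are ours, and the functions are from the Image Processing Toolbox:

```matlab
% Step (a): extra-green feature + Otsu threshold for the cotyledons.
img = im2double(imread('plantlets_left.png'));  % hypothetical input image
Rc = img(:,:,1);  Gc = img(:,:,2);  Bc = img(:,:,3);
exg = mat2gray(2*Gc - Rc - Bc);                 % ExG feature, scaled to [0, 1]
bwCotyledon = imbinarize(exg, graythresh(exg)); % Otsu threshold
```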

(b) Stem Image Segmentation. Because of the uneven color of the stem itself and reflections of light, the segmentation result of the extra-green + Otsu approach was not good for the stem. Instead, the color image of the potato plantlets was converted to grayscale, and a background image was obtained by performing a morphological "open" operation on the gray image with a 15 × 15 circular structuring operator. Finally, the result of subtracting the background image from the gray image was segmented with the Otsu threshold, yielding a binary image of the stem.
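The following sketch mirrors this background-subtraction step (variable names again ours; the opening is anti-extensive, so the residue is nonnegative):

```matlab
% Step (b): background subtraction with a morphological "open" + Otsu.
gray = rgb2gray(img);                        % img from the previous sketch
se = strel('disk', 7);                       % disk of radius 7 (~15 x 15)
background = imopen(gray, se);               % opening removes the thin stems
residue = gray - background;                 % stems remain in the residue
bwStem = imbinarize(residue, graythresh(residue));
```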

(c) Merging the Results of (a) and (b). The cotyledon image and the stem image were superimposed, the small connected domains were removed, and the holes were filled by morphological operations to obtain the complete segmentation result of the potato plantlets.
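A sketch of the merging step; the area threshold of 50 pixels is our assumption:

```matlab
% Step (c): merge the two masks and clean them up morphologically.
bwPlant = bwCotyledon | bwStem;      % superimpose the two binary images
bwPlant = bwareaopen(bwPlant, 50);   % remove small connected domains
bwPlant = imfill(bwPlant, 'holes');  % fill the remaining holes
```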

2.5.2. Node Point Search

The node was the part connecting the cotyledon and the stem of the potato test-tube plantlet. The identification and positioning of nodes were a prerequisite for the robotic clamping and cutting operations in propagation. Figure 4 shows the schematic diagram of the node identification principle. The node search comprised the following steps.

Thin the binary image of the potato plantlets to obtain a single-pixel-wide skeleton image that retains the shape of the plantlets.

Search for the junction points and endpoints of the skeleton. As shown in Figure 4(a), the number of skeleton pixels in the 8-neighborhood of a junction point is not less than 3, while an endpoint has exactly 1 skeleton pixel in its 8-neighborhood. The junction points and endpoints in the image were marked according to this rule. Only some of the discovered junction points were nodes.
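The thinning and the neighbor-count test can be sketched in MATLAB as follows (bwPlant is the merged mask from the sketches in Section 2.5.1):

```matlab
% Thin to a single-pixel skeleton, then count 8-neighbours by convolution.
skel = bwmorph(bwPlant, 'thin', Inf);
nbrs = conv2(double(skel), [1 1 1; 1 0 1; 1 1 1], 'same');
endpts = skel & (nbrs == 1);   % endpoints: exactly one 8-neighbour
juncts = skel & (nbrs >= 3);   % junction candidates: three or more
```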

Filter the nodes out of the junction points. According to the morphological characteristics of the potato plantlets, the nodes all lie on the main stem, so the nodes can be selected on the skeleton of the main stem using this feature. As shown in Figure 4(a), a 3 × 3 rectangular template operator was created whose center value is 0 and whose other entries carry the direction values 1–8. Each endpoint of the skeleton was used as a starting point and the first junction point reached as the stopping point; the section between them is called a skeleton branch. Starting from an endpoint, the convolution of the operator with the current position reveals the 8-neighbor pixel values and also indicates the relative direction from the center to each 8-neighbor, namely, the direction in which the operator moves. Each time the operator moved, the pixel at the current coordinate was assigned a value of 0. Finally, all skeleton branches were set to 0 except the main stem, and the node pixel coordinates were then obtained, as marked in Figure 4(b).
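The direction-template traversal can be approximated by walking each branch from its endpoint and erasing it until a junction is reached; junctions that remain on the pruned skeleton lie on the main stem. A rough sketch, assuming no skeleton pixel touches the image border:

```matlab
function nodes = mainStemNodes(skel)
% Prune skeleton branches from their endpoints; the junctions that survive
% on the pruned skeleton are taken as candidate node pixels.
kernel  = [1 1 1; 1 0 1; 1 1 1];
juncts0 = skel & (conv2(double(skel), kernel, 'same') >= 3);
work = skel;
for pass = 1:3                              % extra passes remove nested branches
    nbrs   = conv2(double(work), kernel, 'same');
    endpts = work & (nbrs == 1);
    juncts = work & (nbrs >= 3);
    [er, ec] = find(endpts);
    for k = 1:numel(er)
        r = er(k);  c = ec(k);
        while ~juncts(r, c)
            work(r, c) = 0;                 % erase the current branch pixel
            win = work(r-1:r+1, c-1:c+1);   % remaining 8-neighbours
            [dr, dc] = find(win, 1);
            if isempty(dr), break; end      % isolated spur fully erased
            r = r + dr - 2;  c = c + dc - 2;
        end
    end
end
nodes = juncts0 & work;                     % junctions left on the main stem
end
```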

2.6. Derivation of 3D Coordinates of Junction Point
2.6.1. Hand-To-Eye Calibration

In this paper, the vision system model of the robot was established with the hand-to-eye model; that is, the camera was installed in a fixed position, separate from the robot. First, the mathematical relation between the vision system and the robot system needed to be established, which means that hand-eye calibration was required. The objective of the hand-to-eye calibration procedure was to determine the transformation between the 2D images perceived by the two cameras and the 3D real-world plantlet information. The intrinsic and extrinsic parameters were first calibrated using the camera model proposed by Zhang [21]. Then, the following hand-eye calibration method was used to determine the hand-eye relation.

The hand-eye calibration schematic is shown in Figure 5. Let the base coordinate system of the arm be Obase-xyz, the coordinate system of the end of the arm be Otool-xyz, the coordinate system of the calibration plate be Ocal-xyz, and the camera coordinate system be Ocam-xyz. $H$ denotes a 4 × 4 homogeneous transformation matrix between two coordinate systems, consisting of a 3 × 3 rotation matrix $R$ and a 3 × 1 translation vector $t$. $H_{tool}^{base}$ represented the homogeneous transformation matrix of Otool-xyz relative to Obase-xyz, which could be obtained through robot kinematics. $H_{cal}^{tool}$ represented the homogeneous transformation matrix of Ocal-xyz relative to Otool-xyz; it was determined by the mounting and did not change after the calibration plate was installed on the robot. $H_{cam}^{cal}$ represented the homogeneous transformation matrix of Ocam-xyz relative to Ocal-xyz, i.e., the external camera parameters obtained by calibration. $H_{cam}^{base}$ represented the homogeneous transformation matrix of Ocam-xyz relative to Obase-xyz, namely, the matrix that needed to be calibrated. Equation (1) shows the conversion relation of these matrices:

$$H_{cam}^{base} = H_{tool}^{base}\, H_{cal}^{tool}\, H_{cam}^{cal} \quad (1)$$

In this paper, the camera did not move with the robot arm because of the hand-to-eye model, so $H_{cam}^{base}$ was fixed. Equating the matrix relation of equation (1) at two different positions of the calibration plate yielded

$$H_{tool,1}^{base}\, H_{cal}^{tool}\, H_{cam,1}^{cal} = H_{tool,2}^{base}\, H_{cal}^{tool}\, H_{cam,2}^{cal} \quad (2)$$

After transformation, equation (3) was obtained:

$$\left(H_{tool,2}^{base}\right)^{-1} H_{tool,1}^{base}\, H_{cal}^{tool} = H_{cal}^{tool}\, H_{cam,2}^{cal} \left(H_{cam,1}^{cal}\right)^{-1} \quad (3)$$

which has the classical form $AX = XB$ with $X = H_{cal}^{tool}$.

Therefore, $H_{cal}^{tool}$ could be obtained by solving equation (3), and $H_{cam}^{base}$ could then be calculated through equation (1).

2.6.2. 3D Coordinate Acquisition of Nodes

The 3D coordinates of the grasping point were determined using the binocular stereo vision system formed by the left and right cameras. Binocular stereo vision acquires three-dimensional information about an object by imaging it simultaneously from multiple viewing angles and exploiting the parallax principle. Let P(X, Y, Z) be the world coordinates of any spatial point, (u, v) the pixel coordinates of the projection of P in the camera imaging coordinate system, and DISP the disparity value. Ol-xlylzl was the left camera coordinate system and Or-xryrzr the right camera coordinate system; Ol and Or were, respectively, the optic centers of the two cameras. The distance between the two optic axes zl and zr was the baseline B. According to the positioning principle of binocular stereo vision, if a point in space is imaged by both cameras, its three-dimensional coordinates can be calculated using equation (4), where f is the focal length and (u0, v0) the principal point obtained from calibration:

$$X = \frac{B\,(u - u_0)}{DISP},\qquad Y = \frac{B\,(v - v_0)}{DISP},\qquad Z = \frac{B f}{DISP} \quad (4)$$

After the binocular stereo camera collected the left and right views, image processing was performed to obtain the node pixel coordinates in both views, and the node coordinates of the two views were matched. Through the above calibration, the spatial three-dimensional coordinates of the nodes were obtained by matrix transformation. In this paper, Zhang's calibration method [21] was used to calculate the intrinsic and extrinsic parameters of the cameras, and the hand-eye calibration method above was then used to determine the hand-eye relationship.
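A minimal numeric sketch of equation (4), assuming a rectified image pair and using our own variable names:

```matlab
% Triangulate a matched node from its left/right pixel coordinates.
% B, f, u0, v0 come from the stereo calibration (assumed variables).
DISP = uL - uR;              % disparity of the matched node
X = B * (uL - u0) / DISP;
Y = B * (vL - v0) / DISP;
Z = B * f / DISP;            % depth along the optic axis
P = [X; Y; Z];               % 3D coordinates in the left camera frame
```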

2.7. Robotic Control Propagation Task

The self-made parallel robot system consisted of a moving platform, a fixed platform, and three independent motion branches connecting the two platforms. Figure 6 shows the 3-PUU parallel manipulator structure. The mechanism consisted of the moving platform A1A2A3, three pairs of equal-length connecting rods, three drive sliders C1C2C3, and the base. The connecting rods connected the moving platform and the sliders through ball joints, forming spherical pairs. A prismatic pair was formed between each driving slider and its pedestal column. The moving platform could be moved by changing the positions of the three drive sliders. Since the spatial parallel connecting rods constrained the three rotational degrees of freedom of the moving platform, the platform could only translate in three dimensions. The fixed coordinate system O-XYZ was built on the base (at the starting point of the sliders), and the moving coordinate system O′-X′Y′Z′ was built on the moving platform, with its origin at the geometric center of the moving platform. The end effector was mounted on the moving platform.

In the parallel manipulator, the displacement of the $i$-th slider along its column was $s_i$, the length of the connecting rod was $l$, and the angle between the rod and the static platform was $\theta_i$. Assume that the radius of the circumcircle of the moving platform A1A2A3 was $r$ and the radius of the circumscribed circle of the base platform B1B2B3 was $R$; then, with the three columns spaced 120° apart at azimuths $\varphi_i = 2\pi(i-1)/3$, the absolute coordinates of the three sliders were

$$\mathbf{C}_i = \begin{bmatrix} R\cos\varphi_i \\ R\sin\varphi_i \\ s_i \end{bmatrix},\quad i = 1, 2, 3 \quad (5)$$

Under the moving coordinate system O′-X′Y′Z′, the coordinate vectors of A1A2A3 were

$$\mathbf{a}_i' = \begin{bmatrix} r\cos\varphi_i \\ r\sin\varphi_i \\ 0 \end{bmatrix},\quad i = 1, 2, 3 \quad (6)$$

The vector relation of the moving coordinate system O′-X′Y′Z′ relative to the fixed coordinate system O-XYZ was

$$\mathbf{p} = \overrightarrow{OO'} = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \quad (7)$$

Since the moving platform had three translational degrees of freedom and no rotational degrees of freedom, the coordinate vector of the point $A_i$ in the fixed coordinate system was

$$\mathbf{A}_i = \mathbf{p} + \mathbf{a}_i' = \begin{bmatrix} X + r\cos\varphi_i \\ Y + r\sin\varphi_i \\ Z \end{bmatrix} \quad (8)$$

Therefore, the inverse solution of the parallel mechanism could be represented by the constraint equation of the fixed-length rod:

$$\left\|\mathbf{C}_i - \mathbf{A}_i\right\|^2 = \left[(R - r)\cos\varphi_i - X\right]^2 + \left[(R - r)\sin\varphi_i - Y\right]^2 + (s_i - Z)^2 = l^2 \quad (9)$$

According to the equation above, $s_i$ could be obtained:

$$s_i = Z + \sqrt{l^2 - \left[(R - r)\cos\varphi_i - X\right]^2 - \left[(R - r)\sin\varphi_i - Y\right]^2},\quad i = 1, 2, 3 \quad (10)$$

where the positive root is taken for the assembled configuration.

If the coordinates of a spatial point in the fixed coordinate system O-XYZ were known, the corresponding displacements $s_i$ of the driving sliders could be calculated by the equation above, which is the inverse solution of the kinematic equation.
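A minimal sketch of this inverse solution; the 120° column azimuths follow the reconstruction of equation (5), and the geometry parameters R, r, l are those defined in this section:

```matlab
function s = inverseKinematics(P, R, r, l)
% Inverse kinematics of the 3-PUU manipulator: slider displacements s(i)
% for a target platform-centre position P = [X; Y; Z].
phi = [0, 2*pi/3, 4*pi/3];      % assumed column azimuths
s = zeros(1, 3);
for i = 1:3
    dx = (R - r)*cos(phi(i)) - P(1);
    dy = (R - r)*sin(phi(i)) - P(2);
    rad = l^2 - dx^2 - dy^2;    % must be >= 0 inside the workspace
    s(i) = P(3) + sqrt(rad);    % positive root: assembled configuration
end
end
```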

3. Results and Discussion

3.1. Camera Calibration Results

Tables 1 and 2 summarize the extrinsic and intrinsic parameters of the CCD cameras, respectively. The accuracy of the camera calibration was evaluated by substituting the image coordinates of each testing point in the left and right camera image planes into the camera model established with the calibrated extrinsic and intrinsic parameters to obtain the theoretical 3D world coordinates. The accuracy of the reconstructed result was then evaluated by computing the root mean squared error (RMSE), which was 0.18 pixels.
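For reference, the RMSE follows the standard definition (notation ours), where $\mathbf{u}_i$ is the measured image coordinate of the $i$-th testing point, $\hat{\mathbf{u}}_i$ its reprojection through the calibrated model, and $N$ the number of testing points:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\|\hat{\mathbf{u}}_i - \mathbf{u}_i\right\|^2}$$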

3.2. Potato Plantlets Image Segmentation

In order to evaluate the performance of the image processing algorithm in this paper, an experiment was designed to test it. The results of image segmentation were classified into three levels: level I, level II, and level III. Level I represented full segmentation of the foreground and background with the foreground information intact and no obvious deficiency. Level II represented full segmentation of the foreground and background, but with clearly deficient foreground information. Level III represented incomplete segmentation of the foreground and background or missing foreground information. A variety of image segmentation algorithms were tested on the vision system, and the results are shown in Figure 7. Figure 7(a) displays the segmentation result obtained with the GrabCut algorithm; it belongs to level III. Figure 7(b) displays the segmentation result obtained by the region-growing method. The foreground and background segmentation was complete, but some information, such as the cotyledons of the virus-free potato plantlets, was lost, so it belongs to level II. Figure 7(c) displays the segmentation result obtained by the FCM clustering algorithm. Some cotyledons and stems were not completely segmented, so it also belongs to level II. Figure 7(d) displays the result of the algorithm used in this paper. The algorithm performed a complete segmentation, and the information of the virus-free potato plantlets remained intact, so it can be placed in level I. The first three algorithms are iterative, and the number of iterations and the complexity of the image tend to affect their accuracy; they were also time-consuming. The algorithm adopted in this research performed the image segmentation best and was also time-saving.

3.3. Node Image Recognition

In order to evaluate the performance of the algorithm, 20 boxes of plantlets from three different growth periods were used to test the accuracy and time of node recognition. Table 3 shows the experimental results. The initial expansion period (15 days) was marked as P-1, the medial expansion period (22 days, the optimal propagation period) as P-2, and the overgrowth expansion period (29 days) as P-3. Figure 8 shows one group of the test images. The nodes in the potato plantlet images were manually identified and marked, and the result was recorded as "manual marking." The number of nodes automatically recognized by the algorithm in this research was recorded as "algorithm marking." The error was the deviation rate between the ground truth and the identification result, and the time was the average time for processing each image. The algorithm performed best on node recognition for the P-2 period, with an error rate of only 1.6%. Two factors caused the errors. The first was that the algorithm failed to recognize young and heavily occluded cotyledons. The second was that there were two cotyledons at the top of the potato seedlings which could be misidentified as nodes. The recognition performance for the P-1 period was second to that of the P-2 period, with an error rate of 5.6%, the recognized count being lower than the true value. The main reason was that the cotyledons of the potato plantlets were not fully grown in this period, and some were so tender that they were seriously occluded by the stem. The recognition performance for the P-3 period was the worst, with an error rate of 11.6%, the recognized count being higher than the true value. This error mainly resulted from the buds at the top of the potato seedlings in this period, whose intersections with the stems were misidentified as nodes.

3.4. Node Positioning Accuracy

In order to evaluate the accuracy of the hand-eye calibration, a checkerboard with 5 mm × 5 mm squares was attached to the end of the robot arm, and the arm was controlled to move to the corresponding points in the calibration space. The coordinates of the key points on the calibration plate were located by the binocular stereo camera, and the coordinate data in the robot coordinate system were obtained through the hand-eye transformation. Seven sets of tests were repeated to compare the difference, in millimeters, between the measured and true Euclidean distances. Figure 9 shows the checkerboard calibration plate and the error analysis diagram of the stereo camera calibration. The camera calibration results indicate that the average pixel error of the binocular stereo vision positioning was 0.18 pixels. According to the statistical data in Table 4, the positioning accuracy of the robot vision system was about 1 mm, which is adequate for practical engineering.

4. Conclusion

This paper designed a vision system for the propagation robot of potato test-tube plantlets, and the system has been installed and tested on the self-made propagation robot. The system could automatically collect binocular images of a row of potato plantlets in the culture box; segment, identify, and locate the connection points of the cotyledons and the main stem; and convert them, according to the hand-eye calibration model, into three-dimensional coordinates usable by the parallel robot. The vision system performed best for potato plantlets in their optimal expansion period (22 days). In this period, the identification error of the node number was less than ±5%, the positioning error was less than 1 mm, and the processing time per cycle was less than 1.5 s. The vision system had high positioning accuracy and excellent real-time performance and could satisfy the performance requirements of the potato plantlet propagation robot.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Earmarked Fund for Modern Agro-industry Technology Research System of China (CARS-09-P08) and the Fundamental Research Funds for the Central Universities (Program no. 2019BC005).