Abstract

An intelligent and modular greenhouse seedling height inspection robot was designed to meet the demand for high-throughput, low-cost, and nondestructive inspection during the growth of greenhouse seedlings. The robot structure mainly consists of a multiterrain replacement chassis, an electronically controlled lifting image acquisition bracket, and a quick release mechanism. SolidWorks was used to design the robot, and Adams was used for motion simulations. With an STM32 microcontroller and a Raspberry Pi at its core, the robot is equipped with various sensors that form a reliable control system for intelligent navigation during inspection tasks and for acquiring high-quality images and environmental data of seedling crops. The developed growth point detection algorithm, based on the EfficientNet deep learning network, can efficiently measure seedling heights, and the host software and cloud server make it easy to monitor and control the robot and to store and manage the collected data. Greenhouse experiments showed that the robot has an average battery life of 5.2 h after being fully charged, with satisfactory motion stability and environmental adaptability; the environmental data collected were valid, with errors within an acceptable range; and the captured seedling images were of high quality, with the seedling height data obtained through algorithm analysis being valid and reliable. The robot is expected to serve as an intelligent assistant for seedling research and production.

1. Introduction

Seedling cultivation is a key step in vegetable production and an important bridge between seeds and vegetable products [1]. Although the vegetable seedling industry has developed rapidly in recent years, most enterprises still manage seedling production mainly based on experience, yielding low seedling growth rates and labor productivity; this is still far from agricultural modernization [24]. With increasing labor costs and the rapid development of information technologies such as the Internet of Things and big data, intelligent and industrialized seedling production has become an inevitable trend in the development of the modern vegetable industry. Continuous monitoring and control during seedling cultivation is an effective way to improve seedling quality and survival rate, and data collection and analysis make it possible to perceive crop growth conditions and take corresponding preventive measures [5, 6]. A seedling factory may deploy a large-scale sensor network for intelligent monitoring, which is considerably more efficient, accurate, and timely than manual methods; however, such networks require complex wiring and are costly. An intelligent mobile robot is a more flexible and cost-effective solution.

Currently, research is being conducted on inspection robots for crop growth monitoring. For example, Guo et al. designed a multi-degree-of-freedom robot system for greenhouse image acquisition and environment monitoring, which accurately acquires images and data through the perception layer and sends them for analysis and storage via a wireless network, thus enabling fine-grained environmental data collection for greenhouse production [7]. Liu et al. designed a monitoring device based on a mobile robot that enables remote real-time monitoring of the greenhouse environment [8]. Li et al. developed a mobile, suspended-rail crop growth and environmental information monitoring system, which integrates the monitoring of crop growth and environmental information in horticultural facilities through multisensor information fusion [9]. Han et al. developed an indoor inspection robot that conducts autonomous inspections along pre-laid electromagnetic guide wires and transmits the collected temperature, humidity, CO2 concentration, and other information to a cloud server for growers to view through WeChat [10]. Lu et al. designed and manufactured a small wheeled tobacco plant protection machine for high-ridge and furrow environments, improving the performance of the transmission and steering systems through a clever mechanical structure to meet the stability requirements of field operations [11]. Barker’s team designed a multisource detection sensor system based on a vehicle-mounted platform to collect images of seedlings from multiple angles [12, 13]. The Sunti team built an agricultural robot platform whose main body uses corrosion-resistant aluminum profiles, whose controller is an Arduino development board, and whose host computer integrates an image processing algorithm; the platform can identify and pick small fruits [14]. Bai et al. mounted a high-throughput multisensor system on a robotic phenotyping platform; it consists of five sensor modules for measuring crop canopy traits in field plots, geo-references the sensor measurements using GPS, and incorporates two environmental monitoring sensors [15]. Atef et al. used a robot to detect leaf traits in greenhouse-grown maize and sorghum; the robot automated the measurement of plant leaf characteristics using a four-degree-of-freedom manipulator carrying a portable spectrometer and a thermistor for leaf temperature measurement [16]. Despite this progress at home and abroad, existing crop growth monitoring robots are not well adapted to greenhouse environments; they perform poorly in seedling growth diagnosis, real-time system control, and data management, so there is still considerable room for development and improvement.

Among the many phenotypic parameters that reflect growth status, seedling height is an extremely critical factor. Measuring seedling height provides an important basis for the quantitative analysis of the sound seedling index and also helps in seedling cultivation [17]. However, there are very few studies on intelligent seedling height inspection robots for greenhouse environments, and inspection robots developed for other purposes cannot adapt to the special environments of seedling greenhouses and perform poorly in seedling height measurement. Therefore, we developed an intelligent greenhouse seedling height inspection robot for seedling growth monitoring. The robot can move agilely and intelligently around the seedling greenhouse, collect seedling growth and environmental information comprehensively and stably, and analyze and store seedling height data in real time using cloud-based image processing algorithms and interactive software. The robot is expected to become a powerful assistant for greenhouse crop research and cultivation personnel by improving their labor efficiency and reducing their labor intensity, thereby promoting the intelligent development of greenhouse crop research and cultivation.

2. Materials and Methods

2.1. Performance Requirement Analysis

Seedling production is mainly carried out in multispan greenhouses or glass greenhouses. A seedbed is generally 1.7 m wide, 18–20 m long, and 0.65 m high. A large number of seedbeds are evenly arranged and distributed in a huge space of several hundred to several thousand square meters, with an average spacing of approximately 0.6 m between seedbeds. The seedbeds can be moved slightly to facilitate workers’ operations. The robot travels around the seedling greenhouse autonomously, collecting environmental information, photographing seedlings, and uploading the data to the cloud server, where the images are processed and analyzed to obtain seedling growth data that users can view in real time through the interactive software on the client computer. Based on these task requirements, the robot should provide the following features: (1) good trafficability and adaptability to different types of greenhouse road surfaces; (2) the ability to overcome obstacles or potholes on the road and remain stable when its movement is disturbed; (3) an adjustable shooting height of the image acquisition device to adapt to different greenhouses and seedling growth periods; (4) autonomous movement in a complex environment; (5) an accurate and efficient image processing algorithm for in-situ detection of seedling heights; and (6) transmission, storage, and management of greenhouse environmental data, seedling images, and seedling height data via a wireless network.

2.2. Robot Body Design
2.2.1. Overall Structural Design

The robot has a modular structure that consists of a multi-terrain replacement chassis, an electronic control lift image acquisition bracket, and a quick release mechanism. The multi-terrain replacement chassis consists of the drive, motion, suspension, and frame components; the electronic control lift image acquisition bracket comprises the motor, telescopic structure, and camera mount; and the quick release mechanism comprises the split connection part and the reset pin. As shown in Figure 1, the mechanical structure of the robot was designed in SolidWorks and then machined.

The height of the seedbeds in the greenhouse is approximately 600–700 mm above the ground; the growth height of the seedlings is approximately 50–150 mm; the minimum imaging distance of the camera is 250 mm; and the spacing between the bottoms of seedbeds is approximately 600 mm. To meet the actual crop inspection and image acquisition requirements and facilitate the handling and movement of the robot, the length, width, and height of the designed robot were 570, 420, and 900–1300 mm, respectively, and the weight was 23 kg. The main structural parameters of the robot are listed in Table 1.

2.2.2. Shock-Absorbing Wheel Replacement Chassis

The chassis is responsible for moving the robot and overcoming obstacles. Because terrains differ among greenhouses, the chassis is powered by four 3.1 N·m type-57 stepper motors that are coupled with different tires through a designed 8 mm shaft-hole coupler so that the wheels can be replaced to suit different road surfaces. For a greenhouse with paved roads and large spacings between seedbeds, low-cost ordinary round rubber tires can be used. For narrow and complex roads in a greenhouse, a Mecanum wheel chassis provides agile multidirectional movement owing to its high mobility. A track chassis is suitable for a greenhouse with unpaved roads and can maintain a tight grip on uneven surfaces by adjusting the belt tensioner. The replacement chassis design significantly improves the robot’s adaptability and trafficability. Figures 2(a)–2(c) show the tire chassis, Mecanum wheel chassis, and track chassis, respectively.

The suspension mechanism maintains the overall stability of the robot by minimizing the shaking of the image acquisition camera caused by the robot’s movement while the tires remain in contact with the ground [18]. The chassis features four-wheel independent suspension: each tire is mounted on a crank and guide-bar mechanism composed of rocker links, bearings, and springs, and swings around the suspension pivot point when passing over potholes, thus enhancing the grip of the chassis on the ground and absorbing shocks. The structure of the chassis suspension mechanism is shown in Figure 3.

2.2.3. Electronic Control Lift Image Acquisition Bracket

A multisource camera is mounted on this bracket for image acquisition. An electrically controlled image acquisition device is built around a 24 V electric push rod with a stroke of 400 mm. The two poles of the push rod’s DC motor are connected to two sets of relays, which are in turn connected to the positive and negative poles of the power supply. By switching the two relay sets on and off, the polarity applied to the DC motor is reversed, so that the electric push rod extends or retracts at 12 mm/s to adjust the camera height. The RealSense camera and Kinect camera are fixed onto the image acquisition device through different brackets to collect RGB-D information of seedling images. As shown in Figure 4, the electronic control lift image acquisition bracket allows the camera to acquire images of seedlings over a height range of 900–1300 mm.
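For illustration, the sketch below shows how such polarity switching could be timed in software, assuming the two relay sets are driven from Raspberry Pi GPIO pins; the pin numbers and helper function are hypothetical, and the paper does not state which controller actually switches the relays:

import time
import RPi.GPIO as GPIO  # assumption: relays are wired to Raspberry Pi GPIO pins

RELAY_EXTEND = 17    # hypothetical pin: energizes the "extend" relay set
RELAY_RETRACT = 27   # hypothetical pin: energizes the "retract" relay set
ROD_SPEED_MM_S = 12  # push-rod speed stated in the text

GPIO.setmode(GPIO.BCM)
GPIO.setup([RELAY_EXTEND, RELAY_RETRACT], GPIO.OUT, initial=GPIO.LOW)

def move_camera(delta_mm):
    """Raise (delta_mm > 0) or lower (delta_mm < 0) the camera by timing the push rod."""
    pin = RELAY_EXTEND if delta_mm > 0 else RELAY_RETRACT
    run_time = abs(delta_mm) / ROD_SPEED_MM_S  # e.g., 120 mm -> 10 s of travel
    GPIO.output(pin, GPIO.HIGH)  # close one relay set; the other stays open
    time.sleep(run_time)
    GPIO.output(pin, GPIO.LOW)   # open the relay again to stop the rod

# Example: raise the camera mount by 120 mm, then lower it back.
move_camera(120)
move_camera(-120)
GPIO.cleanup()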

2.2.4. Quick Release Mechanism

The quick release mechanism is mainly responsible for separating the robot chassis from, and combining it with, different end actuators. It consists of upper and lower parts. The upper part is a boss structure with pin holes on the side, and its upper surface is connected to the connecting seat of the end effector; the lower part mates with the upper connecting body and is connected to the chassis through the base. After the two parts are fitted together, locking is completed by a spring return pin. Because it is made of aluminum alloy, it is lightweight yet has a high carrying capacity. As shown in Figure 5, the quick release mechanism gives the robot a split design, which enables the end actuator to be combined with or separated from the chassis within seconds, thus greatly facilitating the handling and storage of the robot.

2.2.5. Robot Motion Simulation

Because the robot needs to move as stably as possible to reduce the jitter of the camera module and thus acquire higher-quality images of the seedlings, we used Adams to perform dynamics simulations and verify the reliability of the suspension mechanism and the system as a whole. We constructed a map with uneven surfaces to simulate actual road undulations, modeled the robot in equal proportion using SolidWorks, and imported the model into Adams. The dynamic parameters were set according to the actual part mating and material characteristics. The multi-degree-of-freedom spring forces were decomposed into the vertical direction and replaced by their equivalent resultant force. The driving force parameters were set according to the mechanical characteristics of the motor. For the dynamic analysis of the chassis, the robot mass m was set to 23 kg; the global gravitational acceleration was set to 9.81 m/s²; the friction coefficient μ was set to 0.3, the static friction coefficient between rubber and a concrete road; and the vertical direction normal to the base plate was chosen as the main reference direction to reflect the undulation of the robot’s center of gravity.

The simulation results are shown in Figure 6. The vertical coordinate of the curves is the vertical displacement of the center of mass of the bottom plate, and the horizontal coordinate is the time taken by the robot to move forward. The red curve is the result without the chassis suspension mechanism, and the blue curve is the result with it. According to the simulation results, the chassis suspension mechanism improves the overall stability of the robot, especially on undulating roads, where it effectively isolates the vehicle from ground shocks.

2.3. Robot Control System Design
2.3.1. STM32-Based Motion Control Module

As shown in Figure 7, during greenhouse production it is usually necessary to inspect the seedbeds in specified areas, so an intelligent navigation mode was designed based on the operational requirements of the greenhouse. In this mode, the robot moves to specified points on the radar map along the planned route to complete the task.

The control system for the intelligent navigation mode features a distributed design in which motion control is achieved through intermodular communications and collaborations. The main control unit is an STM32F407-series microcontroller with low power consumption, high stability, and rich interfaces, including 114 programmable I/O ports, 17 timers, and 17 communication interfaces, which is sufficient for the control and communication requirements. The processor is a Raspberry Pi 4B, which integrates two HDMI ports, four USB ports, and wired and wireless network interfaces, and can transmit HD video streams while sending and receiving various data simultaneously. A Silan A1 LiDAR with a sampling frequency of 8,000 samples per second and a scanning frequency of 5.5 Hz senses the robot’s surrounding environment and collects road data within a 12 m range. A nine-axis inertial measurement unit (IMU) senses the attitude of the robot during motion, measuring its angular velocity, acceleration, and attitude angle.

The intelligent navigation feature is based on the simultaneous localization and mapping (SLAM) technique. The robot calculates its own position while building a map of the environment from the LiDAR, inertial sensor, and odometer data, and then navigates to the specified points along the planned route. A ROS environment built on the Raspberry Pi Ubuntu 20.04 system provides the mapping and navigation features. The GMapping algorithm [19] is used for mapping, synchronous localization, and map saving: a GMapping node imports the LiDAR and motor odometer data in real time, and after the radar coordinate system is converted to the chassis coordinate system, a map is built using the IMU’s attitude information. The control system loads the built map, which provides the initial position, direction, and target point of the robot; plans a route using the path-planning algorithm [20]; generates the chassis motor speed control instructions; and sends them to the STM32 controller via a serial port.

After receiving a motor control command, the STM32 controller generates PWM signals through four channels of an advanced timer; after power amplification by the four stepper motor drivers, these signals drive the chassis motors, enabling the robot to navigate.
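As an illustration of the command flow from the Raspberry Pi to the STM32, the sketch below converts a planned body velocity into wheel speeds and writes a plain-text instruction to a serial port; the differential-drive approximation, the instruction format, and the port settings are assumptions made for illustration, not the protocol actually used by the robot:

import serial  # pyserial; assumption: the STM32 is reached over a USB-serial port

WHEEL_RADIUS_M = 0.05  # hypothetical wheel radius
TRACK_WIDTH_M = 0.35   # hypothetical distance between left and right wheels

def wheel_speeds(v_mps, w_radps):
    """Differential-drive approximation: body velocity -> left/right wheel speeds (rad/s)."""
    v_left = v_mps - w_radps * TRACK_WIDTH_M / 2.0
    v_right = v_mps + w_radps * TRACK_WIDTH_M / 2.0
    return v_left / WHEEL_RADIUS_M, v_right / WHEEL_RADIUS_M

def send_speed_command(port, v_mps, w_radps):
    """Format a plain-text speed instruction and write it to the STM32 over the serial link."""
    wl, wr = wheel_speeds(v_mps, w_radps)
    cmd = "VEL,{:.3f},{:.3f}\n".format(wl, wr)  # hypothetical instruction format
    port.write(cmd.encode("ascii"))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as stm32:  # port/baud are assumptions
        send_speed_command(stm32, v_mps=0.3, w_radps=0.0)  # move forward at 0.3 m/s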

2.3.2. Raspberry Pi-Based Image Acquisition Module

As shown in Figure 8, the image acquisition module of the robot consists of a camera system, an information processing unit, and a data transceiver module. The camera system consists of a RealSense camera and a Kinect camera mounted at the end of the abovementioned electronic control lift image acquisition bracket and a surveillance camera at the front of the chassis. Intel’s RealSense D415 camera acquires RGB-D information at 1280 × 720 resolution, and the Azure Kinect camera acquires color information at 4096 × 3072 resolution and depth information at 1024 × 1024 resolution. Owing to their different imaging principles, the Kinect camera provides better-quality depth images, whereas the RealSense provides better-quality color images; the two RGB-D cameras are used together so that subsequent image fusion can yield better RGB-D image quality for crop image acquisition. The surveillance camera is a LeSports 3-in-1 camera that transmits the view in front of the robot in real time and provides the remote monitoring function.

With a limited space inside the robot, the information processing unit uses a Raspberry Pi 4B with dimensions (L × W × H) of 88 mm × 58 mm × 19.5 mm. Its 1.5 GHz 64-bit quad-core processor and TCP/IP protocol-based wireless network communication support can meet the needs for image acquisition and transmission.

The data transceiver module is a 5-mode, 13-band 4G network transmission module, which features ultralow latency, data encryption, and stable signals and meets the networking requirements of the robot.

The image acquisition system runs in the Raspberry Pi Ubuntu 20.04 environment. The image acquisition code is written in Python 3.7 and connects to the camera video streams by calling OpenCV and NumPy library functions. During image acquisition, the RealSense or Kinect camera is connected to the Raspberry Pi through a USB 3.0 interface, and the acquired seedling crop images are uploaded to the cloud server for storage and processing through the 4G transmission module. The surveillance camera is connected to the Raspberry Pi via a USB interface, and a video stream is established with the cloud server via the TCP/IP protocol to transmit the robot surveillance images in real time for retrieval and viewing.
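A minimal sketch of this acquire-and-upload flow is shown below; it uses a plain OpenCV capture and a hypothetical HTTP endpoint on the cloud server (the actual server interface is not described here, and the RealSense/Kinect depth streams would normally be read through their own SDKs rather than cv2.VideoCapture):

import cv2            # OpenCV, as used for image acquisition
import numpy as np
import requests       # assumption: images are pushed to the server over HTTP
from datetime import datetime

UPLOAD_URL = "http://cloud.example.com/api/seedling-images"  # hypothetical endpoint

def capture_frame(device_index=0):
    """Grab a single color frame from a USB camera through OpenCV."""
    cap = cv2.VideoCapture(device_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera frame could not be read")
    return frame

def upload_frame(frame):
    """JPEG-encode the frame and send it to the cloud server for storage and processing."""
    ok, buf = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    name = datetime.now().strftime("seedling_%Y%m%d_%H%M%S.jpg")
    requests.post(UPLOAD_URL, files={"image": (name, buf.tobytes(), "image/jpeg")}, timeout=10)

if __name__ == "__main__":
    upload_frame(capture_frame())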

2.3.3. Environmental Information Collection Module

The environmental information collection mainly focuses on light intensity, temperature, humidity, and carbon dioxide (CO2) concentration in the greenhouse environment, which have a great impact on crop growth. The environmental information collection system consists of a sensor, STM32 processor, and 4G transmission module, as shown in Figure 9. A light intensity sensor based on the BH1750 chip is used to measure the light intensity within the range of 0–65535 lx; a temperature and humidity sensor SHT30 is used to measure the temperature and humidity within the ranges of −40–125°C and 0%–100%, respectively, and with the accuracies of ±0.3°C and ±2%, respectively. A CCS811 sensor is used to measure the CO2 concentration within the range of 400–5000 mg/m³.

When collecting environmental information, each sensor converts the analog environmental signals into digital signals through its A/D conversion chip and sends the data to the STM32 via TTL serial or I2C communication; the STM32 then uploads the environmental data to the cloud server for storage through the 4G transmission module.
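For reference, the raw digital words from these sensors are converted to physical units as sketched below, using the published BH1750 and SHT30 conversion formulas; in the robot this conversion runs in the STM32 firmware, so the Python form is purely illustrative:

def bh1750_lux(raw_word):
    """BH1750: illuminance in lux is the 16-bit reading divided by 1.2 (datasheet formula)."""
    return raw_word / 1.2

def sht30_temperature_c(raw_word):
    """SHT30: T[degC] = -45 + 175 * raw / (2^16 - 1) (datasheet formula)."""
    return -45.0 + 175.0 * raw_word / 65535.0

def sht30_humidity_pct(raw_word):
    """SHT30: RH[%] = 100 * raw / (2^16 - 1) (datasheet formula)."""
    return 100.0 * raw_word / 65535.0

# Example: raw words as they might arrive from the I2C bus.
print(bh1750_lux(54612))           # ~45510 lx
print(sht30_temperature_c(26214))  # ~25.0 degC
print(sht30_humidity_pct(39321))   # ~60.0 %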

2.3.4. Power Management Module

The mobile robot is powered by a 24 V vehicle lithium battery. Because the various electronic control systems and sensors require different voltages, PW2902, PW2183, and PW2052 chips are connected in a series-parallel combination to build a power management module that supplies 24 V, 12 V, and 5 V high-current outputs and provides rectification as well as overvoltage, overcurrent, and reverse-polarity protection. As shown in Figure 10, the step-down regulator was simulated electrically using the Simulink tool in MATLAB to simplify the analog chip circuitry. The simulated waveforms are shown in Figure 11: with a 24 V, 0.6 A input, the output voltage of the simplified circuit stabilizes at 12 V within a very short time, and its waveform is in line with expectations. This design meets the power supply requirements of each module.

2.4. Robot Algorithms and Software System Design
2.4.1. Host Computer Control Software

As shown in Figure 12, the robot can be controlled and managed through the PC-based host software, which greatly enhances the real-time performance and convenience of the system. A laptop serves as the host computer (Core i7-3632 CPU, 2.20 GHz, 12 GB RAM, 64-bit OS). The robot’s host control software for Windows was developed in a PyQt5 + Qt Designer environment using the Python language. The software has a simple, easy-to-understand GUI that makes it easy for users to get started and integrates features such as robot control and operation status monitoring, image shooting control, and crop image data viewing and management.

The Raspberry Pi processor functions as an intermediary for wireless control of the robot motion by the host PC. When the host PC and the Raspberry Pi are connected to the same hotspot, the host PC establishes a connection with the Raspberry Pi by accessing its IP address. To perform a control operation, the host computer sends a “control instruction + check code” character string to the Raspberry Pi’s IP address. Upon reception, the Raspberry Pi decodes the string into control instruction characters and forwards them to the STM32 via a USART serial port to control the motors, thereby controlling the robot. After the “Start” button in the host software is clicked, the robot moves along the route of destinations predefined in the ROS system; after the “Stop” button is clicked, the robot stops moving. In addition, buttons such as “forward/backward” and “turn around” in the host software can be used for manual remote control of the robot.
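The sketch below illustrates the “control instruction + check code” exchange from the host-computer side; the IP address, port, single-word instructions, and XOR check code are assumptions made for illustration, since the exact protocol is not specified here:

import socket

RASPBERRY_PI_IP = "192.168.4.10"  # hypothetical address on the shared hotspot
COMMAND_PORT = 9000               # hypothetical TCP port

def with_check_code(instruction):
    """Append a simple XOR check code over the instruction characters (illustrative scheme)."""
    code = 0
    for ch in instruction:
        code ^= ord(ch)
    return "{}*{:02X}".format(instruction, code)

def send_command(instruction):
    """Send 'control instruction + check code' to the Raspberry Pi as a character string."""
    payload = with_check_code(instruction).encode("ascii")
    with socket.create_connection((RASPBERRY_PI_IP, COMMAND_PORT), timeout=2) as sock:
        sock.sendall(payload)

# Example: hypothetical instructions bound to the GUI buttons.
send_command("FORWARD")
send_command("STOP")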

Features of the host computer such as robot operation monitoring, seedling growth status diagnosis and analysis, and crop information and environmental data management are provided by the server-side program deployed on the cloud server. The server-side program is developed in Node.js and communicates through a socket interface; it runs behind an Nginx architecture and integrates a MySQL database to build a cloud data storage and processing system that serves as a bridge between the robot and the host software.

The surveillance camera at the front of the chassis uploads the captured images to the server and establishes a video stream, which is accessed by the host computer to monitor the robot’s operation. The environmental data collected by the robot’s sensors are uploaded to the server every hour. After the “Shoot” button in the host software is clicked, the collected seedling images are uploaded to the server and saved; after the “Data Analysis” button is clicked, the server program calls the image processing algorithm to detect and analyze the uploaded images and obtain plant biomass data such as seedling height. After the “Save Results” button is clicked, the data in the server database are exported to an Excel table for easy viewing and management.

2.4.2. Image Processing Algorithm for Seedling Height Detection

Seedling height is a critical biomass parameter for determining seedling growth quality. It is the distance from the base of the plant to the top of the main stem, i.e., the main stem growth point [21]. The identification and localization of growth points is therefore the key to seedling height measurement. In this project, we tested five deep learning networks and selected the EfficientNet network [22] to identify seedling growth points. As shown in Figure 13, the network uses a weighted bidirectional feature pyramid network (BiFPN), which allows simple and fast multiscale feature fusion, and it incorporates a compound scaling method that uniformly scales the resolution, depth, and width of the backbone, feature, and prediction networks. With these ideas, the detection accuracy of the EfficientNet network is greatly improved compared with the other networks.

We used transfer learning for training, taking the weights of the ResNet and VGG network models trained on the Pascal VOC dataset [23] as the initial weights of the network models. LabelImg software was used to label the growth points in the color images of cucumber seedlings and save the annotations as .xml files. The labeled .xml annotations and the original images were split between the training set and test set at a ratio of 9 : 1 to create a dataset in VOC2007 format, which was imported into the network model for training to obtain a weight file for the detection model in which all parameters of the training process were saved. By loading this weight file, the growth points of fruit and vegetable seedlings can be detected. The relevant training parameters were as follows: the total number of iterations was 1200; during the first 800 iterations, some layers of the network were frozen and the learning rate was set to 5e − 4; during the remaining 400 iterations, all layers were unfrozen and the learning rate was set to 1e − 5; and the training batch size was 4. The detection model was developed under the PyTorch 1.13.2 deep learning framework on a Windows 10 64-bit operating system with an Intel i5-10400F CPU and an NVIDIA GeForce GTX 1660 SUPER GPU.
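A minimal PyTorch sketch of this two-stage freeze/unfreeze schedule is given below; a generic torchvision detector stands in for the EfficientNet-based model so the example stays runnable, and GrowthPointDataset is a hypothetical dataset class, so the sketch mirrors only the iteration counts, learning rates, and batch size stated above rather than the authors’ training script:

import torch
import torchvision
from torch.utils.data import DataLoader

# Stand-in model: a torchvision detector replaces the EfficientNet-based detector here.
# GrowthPointDataset is a hypothetical Dataset yielding (image_tensor, target_dict) pairs
# built from the VOC2007-format annotations.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()

loader = DataLoader(GrowthPointDataset("VOC2007"), batch_size=4, shuffle=True,
                    collate_fn=lambda batch: tuple(zip(*batch)))

def set_backbone_frozen(frozen):
    """Freeze or unfreeze the feature-extraction layers (stage 1 vs. stage 2)."""
    for p in model.backbone.parameters():
        p.requires_grad = not frozen

def run_stage(num_iters, lr):
    """Train for a fixed number of iterations at the given learning rate."""
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(params, lr=lr)
    done = 0
    while done < num_iters:
        for images, targets in loader:
            loss = sum(model(list(images), list(targets)).values())  # detection loss dict
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            done += 1
            if done >= num_iters:
                return

set_backbone_frozen(True)    # first 800 iterations: partially frozen network, lr = 5e-4
run_stage(800, lr=5e-4)
set_backbone_frozen(False)   # remaining 400 iterations: all layers trainable, lr = 1e-5
run_stage(400, lr=1e-5)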

To solve the overfitting issue caused by the small amount of data and to improve the model training effect and result accuracy, the dataset was enriched by data augmentation. The complexity of the samples was increased using traditional augmentation methods, e.g., image rotation (45°, 60°, and 90°), brightness adjustment (0.8× and 1.3×), contrast enhancement (0.8×), addition of Gaussian noise (standard deviation 0.1), and mirroring (horizontal flipping), and the dataset was expanded to eight times its original size. The preprocessed images were manually labeled using LabelImg software: the growth points of the multiple seedlings in each image were marked with rectangular boxes named “growpoints,” and the results were saved as .xml files; after annotation, each image corresponds to a .xml file with the same name. A total of 1600 near-growth-point color images were produced. To ensure the independence of the data, 90% of the images were used for the training set and 10% for the test set, which were fed into the deep learning network model for training.
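An OpenCV/NumPy sketch of these traditional augmentations is shown below; the angles and factors follow the text, while the function names are illustrative and bounding-box handling is omitted because labeling was performed after augmentation:

import cv2
import numpy as np

def rotate(img, angle_deg):
    """Rotate about the image center (45, 60, and 90 degrees were used)."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h))

def adjust_brightness(img, factor):
    """Scale pixel intensities (0.8x and 1.3x were used)."""
    return cv2.convertScaleAbs(img, alpha=factor, beta=0)

def adjust_contrast(img, factor):
    """Scale pixel deviations around the mean intensity (0.8x was used)."""
    mean = img.mean()
    return np.clip((img.astype(np.float32) - mean) * factor + mean, 0, 255).astype(np.uint8)

def add_gaussian_noise(img, sigma=0.1):
    """Add zero-mean Gaussian noise with standard deviation 0.1 (on a 0-1 intensity scale)."""
    noisy = img.astype(np.float32) / 255.0 + np.random.normal(0.0, sigma, img.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255).astype(np.uint8)

def mirror(img):
    """Horizontal flip."""
    return cv2.flip(img, 1)

img = cv2.imread("seedling.jpg")
augmented = [rotate(img, 45), rotate(img, 60), rotate(img, 90),
             adjust_brightness(img, 0.8), adjust_brightness(img, 1.3),
             adjust_contrast(img, 0.8), add_gaussian_noise(img), mirror(img)]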

A schematic diagram of seedling height measurement is shown in Figure 14. The center of the growth point prediction box identified by the EfficientNet network serves as the pixel coordinates of the growth point. The spatial coordinates of each pixel point in the depth image captured by the Kinect camera can be calculated using equation (1).

By mapping the pixel coordinates of the growth point to the depth image, the spatial coordinates of the growth point can be extracted [24], and the depth from the camera plane to the growth point can then be calculated. If the measurement environment remains unchanged, the distance from the camera plane to the top of the seedling pot can be measured manually in advance, and the seedling height can be calculated using equation (2).
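The sketch below illustrates this computation: a standard pinhole back-projection stands in for equation (1), and the seedling height is obtained by subtracting the growth-point depth from the manually measured camera-to-pot distance, as described above; the intrinsic parameters and variable names are placeholders:

import numpy as np

# Placeholder camera intrinsics (fx, fy, cx, cy) for the depth camera.
FX, FY, CX, CY = 504.0, 504.0, 512.0, 512.0

def pixel_to_camera_xyz(u, v, depth_m):
    """Pinhole back-projection of a depth pixel (u, v) into camera coordinates (meters)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def seedling_height_mm(depth_image_m, growth_point_uv, camera_to_pot_mm):
    """Seedling height = (camera-to-pot-surface distance) - (camera-to-growth-point depth)."""
    u, v = growth_point_uv                      # box center from the growth point detector
    d_mm = float(depth_image_m[v, u]) * 1000.0  # depth of the growth point, in millimeters
    return camera_to_pot_mm - d_mm

# Example with synthetic data: growth point 320 mm below the camera, pot surface at 400 mm.
depth = np.full((1024, 1024), 0.32, dtype=np.float32)
print(seedling_height_mm(depth, (512, 480), camera_to_pot_mm=400.0))  # -> 80.0 mm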

3. Prototype Test Results

3.1. Robot Operational Stability Test

The operational stability test of the robot prototype was conducted in the intelligent glass greenhouse of Huazhong Agricultural University during May 24–26, 2022, as shown in Figure 15. On the three days, the average temperatures of the test environment were 25.6, 25.8, and 29.8°C, respectively, and the average relative humidity values were 76%, 78%, and 85%, respectively; the weather was sunny with good light conditions; the floor of the greenhouse was cement pavement; the height of the seedbeds was 750 mm; and the spacing between them was 800 mm. The test started with the robot fully charged. It moved around the seedbeds in the intelligent navigation mode to collect images, and its operation was monitored from the host software. The robot was operated from the fully charged voltage of 24.4 V down to 22.8 V, at which point its operation became unstable.

The test results showed that the robot featured an average battery life of 5.2 h in the greenhouse environment and high trafficability and stability during its operation. It was highly reliable and able to perform specified operations for crop phenotype detection and environmental data collection. The robot, server, and host computer communicated with one another stably and properly even under high temperature and humidity conditions, indicating that it could adapt to complex environments.

3.2. Environmental Data Validity Testing

The environmental data collected during the robot prototype test were collated, and the readings of an environmental monitor from Changzhou Ekos Electronic Technology Co., Ltd. were used as comparison values to verify the accuracy of the environmental sensors, including the temperature and humidity, light intensity, and CO2 sensors. For the reference monitor, the resolution and range of the temperature measurement were 0.01°C and −40–60°C, respectively; the resolution and range of the relative humidity measurement were 0.01% and 0%–100% RH, respectively; the resolution and range of the illuminance measurement were 10 lx and 0–100,000 lx, respectively; and the range of the CO2 sensor was 0–5000 mg/m³. The environmental data collected by the robot prototype were used as the measured values, and both sets of data were updated every hour. Comparing the measured greenhouse environmental data with the comparison values (Figure 16), we concluded the following: the maximum difference between the measured temperature and the comparison value was 1.96°C, with a maximum relative error of 5.8%; the maximum difference for humidity was 1.79%, with a maximum relative error of 3.0%; the maximum difference for light intensity was 333 lx, with a maximum relative error of 2.6%; and the maximum difference for CO2 concentration was 66 mg/m³, with a maximum relative error of 8.6%.

3.3. Seedling Height Detection Validity Test

Phenotypic parameters of the seedling crop canopy were measured in the greenhouse for early Jia 8424 watermelon seedlings at the one-true leaf and one-apical bud stage, and Fengle Golden A pumpkin seedlings at the one-true leaf and one-apical bud stage as well as two-true leaf and one-apical bud stage. While moving around, the robot used the Kinect camera to acquire RGB color images and depth images; then it used the image processing algorithm to identify the growth points of seedlings at different growth stages and finally measured seedling heights.

To verify the accuracy of the growth point detection algorithm, 49 watermelon seedlings at the young seedling stage, 45 watermelon seedlings at the one-true leaf and one-apical bud stage, 44 pumpkin seedlings at the one-true leaf and one-apical bud stage, and 47 pumpkin seedlings at the two-true leaf and one-apical bud stage were randomly selected for the test, as shown in Figure 17.

It is difficult to intuitively judge the pros and cons of each model by comparing the detection images of the five models for watermelon seedlings [25]. Using 160 images of watermelon seedlings as the test set, the five networks were quantitatively evaluated using AP (average precision) and the F1 score (an evaluation index that comprehensively considers precision and recall). The test results are shown in Table 2. The AP and F1 values of the EfficientNet network for the detection of watermelon seedling growth points are higher than those of the other four target detection models, and the detection times of the five models differ little. Therefore, the EfficientNet network was selected as the detection model for the growth points of fruit and vegetable seedlings in this paper.
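For reference, single-class AP and F1 can be computed as sketched below from per-detection confidence scores and true-positive flags (a generic VOC-style all-point interpolation, not the authors’ evaluation code):

import numpy as np

def average_precision(scores, is_tp, num_ground_truths):
    """All-point interpolated AP for one class from per-detection scores and TP flags."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(num_ground_truths, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # Make precision monotonically decreasing, then integrate over recall.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

def f1_score(tp, fp, fn):
    """F1 combines precision (tp / (tp + fp)) and recall (tp / (tp + fn))."""
    precision = tp / max(tp + fp, 1e-9)
    recall = tp / max(tp + fn, 1e-9)
    return 2 * precision * recall / max(precision + recall, 1e-9)

# Example: 6 detections over 5 labeled growth points.
print(average_precision([0.9, 0.8, 0.7, 0.6, 0.5, 0.4], [1, 1, 0, 1, 1, 0], 5))
print(f1_score(tp=4, fp=2, fn=1))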

To verify the accuracy of the seedling height calculation algorithm, the heights of the abovementioned tested seedlings were measured. The height from the growing point of the seedling to the surface of the plug was measured using a vernier caliper with an accuracy of 0.1 mm, which served as the actual seedling height value. To make the test results more intuitive, the actual values of seedling heights measured manually were used as the x-axis and the seedling heights calculated using the proposed algorithm were used as the y-axis to draw scatter plots, as shown in Figures 18(a)–18(d). These figures are the scatter plots for watermelon seedlings at the young seedling stage, watermelon seedlings at the one-true leaf and one-apical bud stage, pumpkin seedlings at the one-true leaf and one-apical bud stage, and pumpkin seedlings at the two-true leaf and one-apical bud stage, respectively.

To compare the agreement between the seedling heights calculated using the proposed algorithm and the manually measured values, a least squares regression analysis was conducted to linearly fit the scatter plots of the two datasets, and the corresponding goodness of fit R2 and root mean square error (RMSE) between the predicted and true values were calculated. Both R2 and RMSE describe the degree of agreement between the two datasets: a higher R2 and a lower RMSE indicate better agreement between the predicted and true values. Equations (3) and (4) are used to calculate R2 and RMSE, respectively.
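The sketch below shows one common way to compute the linear fit, R2, and RMSE for the predicted-versus-measured heights; the sample arrays are placeholders rather than the measured data:

import numpy as np

def fit_and_evaluate(manual_mm, predicted_mm):
    """Least-squares line through the scatter plot plus R^2 and RMSE of the predictions."""
    x = np.asarray(manual_mm, dtype=float)     # manually measured heights (x-axis)
    y = np.asarray(predicted_mm, dtype=float)  # algorithm-calculated heights (y-axis)
    slope, intercept = np.polyfit(x, y, 1)     # linear fit: y = slope * x + intercept
    ss_res = np.sum((y - x) ** 2)              # residuals of predictions vs. true values
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y - x) ** 2))
    return slope, intercept, r2, rmse

# Placeholder example (values are illustrative, not the paper's measurements).
manual = [52.0, 61.5, 74.2, 88.0, 95.3]
predicted = [54.1, 60.2, 76.8, 85.9, 97.0]
print(fit_and_evaluate(manual, predicted))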

The results of the seedling height calculations are shown in Table 3, which indicates that the R2 values between the seedling heights measured using the proposed algorithm and those measured manually were greater than 0.9 for all four seedling stages of the fruit and vegetable seedlings, and the corresponding RMSE values were 2.81, 3.69, 3.43, and 4.83, respectively. The fitted lines were close to the 1 : 1 line with a slope of 1. These results confirm that the seedling heights calculated by the proposed algorithm are accurate. In Table 3, “W-YOUNG” represents watermelon seedlings at the young seedling stage, “W-ONE-TRUE” represents watermelon seedlings at the one-true leaf and one-apical bud stage, “P-ONE-TRUE” represents pumpkin seedlings at the one-true leaf and one-apical bud stage, and “P-TWO-TRUE” represents pumpkin seedlings at the two-true leaf and one-apical bud stage.

4. Conclusion

We designed an intelligent and modular greenhouse seedling height inspection robot to acquire images of seedlings and environmental data during seedling cultivation in a greenhouse. The test results confirmed the following. First, the robot is highly versatile and stable. Its multi-terrain replacement chassis can adapt to different types of road surfaces in the greenhouse, and its independent suspension design enhances the stability of the robot in motion, so that the robot can complete inspection tasks in a variety of common greenhouses. The designed electronic control lift image acquisition bracket can capture high-quality images of seedling crops, meeting the shooting requirements of greenhouse seedlings at different heights. Second, the robot can collect data reliably. Using the EfficientNet-based deep learning algorithm to identify the growth point, the seedling height can be measured accurately, realizing efficient in-situ crop measurement; moreover, the environmental data collection module can accurately obtain light intensity, temperature and humidity, and CO2 concentration data in the greenhouse. Finally, the robot system features a high level of integration and good real-time performance. The host computer and cloud server connect the system modules in real time, making it easier for the user to monitor and control the robot as well as analyze and manage data. In future work, image fusion technology can be used to improve the quality of the collected images and further increase recognition accuracy; by further integrating the modules and upgrading the human-computer interaction software, the system can reach a higher level of integration and improved operating efficiency while remaining convenient to use. The robot plays a significant role in assisting greenhouse seedling cultivation research and promoting the mechanization and intelligence of seedling cultivation.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this article.

Acknowledgments

This work was supported by the National Key Research and Development Program of China (2019YFD1001900), the HZAU-AGIS Cooperation Fund (SZYJY2022006), the National College Student Innovation and Entrepreneurship Training Program of Huazhong Agricultural University (202110504059), and the Hubei Provincial Key Research and Development Program (2021BBA239).