Abstract

Localization is an important method for autonomous indoor robots to recognize their positions. Generally, the navigation of a mobile robot is conducted using a camera, Lidar, and the global positioning system (GPS). However, GPS is unavailable in an indoor environment. Therefore, a state-trajectory tracking method based on a Lidar map is utilized. This paper presents the path following of an autonomous indoor mobile robot, that is, a shuttle robot, using a state-flow method via a robot operating system (ROS) network. MATLAB and Linux high-level computers and an inertial measurement unit sensor are used to obtain the Cartesian coordinate information of a bicycle-type mobile robot. The path-following problem can be solved in the state-flow block by setting appropriate time, linear velocity, and angular velocity variables. After the predetermined time, the linear and angular velocities are set in the state-flow block based on the length of the path and the radii of the quarter-circles of the left and right turns; path planning that can execute the work effectively is thus established using the state-flow algorithm. The state-flow block produces time-series data that are sent to the Linux system, which facilitates a real-time path-following scenario for the mobile platform. Several cases within the path-following problem of the mobile robot were considered, depending on the linear and angular velocity settings: the mobile robot moved forward and backward and turned right and left on circular paths. The effectiveness of the method was demonstrated using desktop-based indoor mobile robot control results. Thus, the paper focuses on the application of the state-flow algorithm to the shuttle robot, specifically in a narrow indoor environment.

1. Introduction

New models are often encountered in the agricultural, protection, and precision farming sectors. Computers, artificial intelligence, and big data technologies have been used to develop intelligent farming systems, particularly farming robots [1]. With the rapid development of advanced technologies, many new techniques, such as big data, artificial intelligence, the Internet of Things, machine vision, and agricultural robotics, have been applied to agricultural production [2]. These robots can be classified as grafting robots, transport robots, spraying robots, agricultural information-collecting robots, weeding robots, and seeding robots [3]. Robotics and sensing technologies have led to a technological revolution in agriculture [4]. Unmanned guided vehicle (UGV) projects are crucial for ongoing research. Many technologies relevant to companies (Google Car, VIP, Tesla, Mercedes, Renault, PSA, Nissan, Fiat, etc.) have already been implemented, according to the study by Khalifa et al. [5]. Stanley control and lateral speed control were introduced in [5]; however, they were controlled using the vector interval rather than the time interval. Recently, path tracking has become an important subject in engineering problems. This method can be applied to both agricultural and marine environments. A conventional method for automatic path tracking was introduced in [6]. Recently, linear active disturbance rejection control (LADRC) was designed for the trajectory tracking control problem of a differential-type model [7]. A recursive technique was applied to the path-tracking problem of the differential-type model by composing a chained form of the system [8]. Recently, the pure pursuit method has been widely used for the path tracking of outdoor and indoor mobile robots. However, these algorithms and methods are controlled using the vector interval, which causes the mobile platform to vibrate during implementation. Automated navigation technology plays a crucial role in the autonomous navigation of mobile robots in agriculture [9]. Researchers have developed several automated path-following methods based on various navigation principles. Using an electromagnetic sensor, Song [10] developed automatic navigation of a spray robot. It is difficult to maintain a navigation system [11]. The exploration of autonomous robots is challenging, particularly in the agricultural field, where the environment is characterized by uneven surfaces that change daily owing to dust and fog, which affect sensor observations, and by a lack of operational features for localization [11]. Efficient exploration strategies are an important part of autonomous agricultural robots in this sector because battery limitations restrict their operating time. The advantage of the pure pursuit algorithm is that the path of the mobile robot follows waypoints. Several methods have been applied to agricultural field operations for robotic localization. Localization systems such as the global positioning system (GPS) [12], real-time kinematic GPS (RTK-GPS) [13], geographic information systems [14], and Lidar-based systems have been applied to agricultural mobile robotic systems [15]. Among these methods [16], the RTK-GPS method has been proven to achieve the highest accuracy in robot localization [17]. However, this method has an extremely high cost. Hence, from the perspective of commercialization, it is rarely used in field applications.
Before a test, state-flow algorithms can be evaluated for their efficiency and effectiveness through simulations. Furthermore, a test bed can be constructed to match the specific indoor application. The mobile platform presented in this paper is called a unicycle-type mobile platform and has been researched in several studies [6, 18-25]. Recent algorithms have been applied to unicycle-type system modeling in several studies; for example, linear active disturbance rejection control (LADRC) has been applied to unicycle-type system modeling [26]. For optimal control, a gradient descent algorithm has been studied to solve the trajectory problem [27]. This paper employs the state-flow algorithm recently introduced by MathWorks. Several researchers [28, 29] have investigated the state-flow method, namely, its applicability to flight control, hybrid energy control system design, and simulation. The benefit of the state-flow method is that the platform is controlled using time-interval control rather than vector-interval control, which significantly reduces the vibration of the platform compared with waypoint followers such as the pure pursuit and Stanley methods. Therefore, we determined that the state-flow method is appropriate for the path-tracking problem in an indoor mobile AGV environment. This is because the space in the indoor environment is very narrow, and the path-tracking method in the state-flow follows an imaginary path smoothly and accurately. Thus, stationary obstacles can be easily avoided using the state-flow method. In this study, we demonstrated that the proposed method can easily and accurately track an imaginary path by setting a rope during the experiment. The paper presents the results of the mobile platform following the imaginary path (along the previously set rope) smoothly and accurately. A similar wireless communication network is described in [30].

Thus, the contributions of this paper can be expressed as follows:
(1) Despite the various aforementioned control and estimation methods, the experiment was conducted focusing on the state-flow recently introduced in MATLAB/Simulink.
(2) The actual and reference paths are located in the Lidar map, which reflects the real measured environment.
(3) The path-tracking problem can then be solved in a state-flow block by setting appropriate time, linear velocity, and angular velocity variables. Thus, the effectiveness of the method is verified by the results of a desktop-based indoor mobile robot control.

In summary, Section 2 describes the hardware configuration, how the platform connects to the high-level computers, and the modeling of a bicycle-type system. The results obtained by testing the obstacle avoidance path and the indoor driving path using the radius of a circle, after applying the algorithm in the state-flow, are provided in Section 3. Finally, conclusions are given in Section 4.

2. Materials and Methods

2.1. Autonomous Navigation Systems and Proposed Methodology

In this section, we introduce the trajectory control system, kinematics, and motion model of the mobile robot used in this study. The block diagram is similar to that of the proposed kinematic motion control, and the basic indoor mobile robot chassis structure and reference axis definition are shown in Figure 1. In this study, a unicycle-like mobile robot model was selected because it is the most common type of mobile robot used in various applications such as surveillance, floor cleaning, and autonomous wheelchairs. Unicycle mobile robots are also used in agricultural applications [31, 32]. Path planning can be classified into global and local path planning. The selected kinematic model for the indoor mobile robot is represented by Equation (1).
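
For reference, the standard unicycle kinematic model commonly used for such platforms, and assumed here to correspond to Equation (1), is

\[
\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega,
\]

where (x, y) denotes the Cartesian position of the robot, \theta the heading angle, v the linear velocity, and \omega the angular velocity.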

The hardware configuration used in the experiments at Chungbuk National University is shown in Figure 2. The high-level computer reduces the speed error between the motors, while the low-level controller tracks the reference speeds of the eight motors. The driving devices and steering devices each consist of four wheels, as described in Figure 2. The top, bottom, and front views of the four-wheel mobile platform are shown in Figures 3 and 4.

The actual view of the four-wheel mobile platform is shown in Figure 5. The rear two wheels are idler-type wheels, and the front two wheels are controlled by the linear and angular velocities.

For trajectory generation with constraints, a predetermined path is obtained using the state-flow method. First, the linear and angular velocities v(s) and ω(s) are computed spatially along the entire path. Equations (2)-(4) are used to control the radius of the circle by changing the linear and angular velocity inputs. Using the well-known programming environment Simulink, time moments are assigned to these velocities to generate v(t) and ω(t). At all times, a unicycle robot has a linear velocity v and an angular velocity ω, and their values can be generated for different trajectories over time. The spatial variables v(s) and ω(s) in Simulink are determined by the curvature κ at each point along the path. In the design procedure, we begin by determining v and ω spatially based on the curvature κ at a specific point along the path, as represented by Equations (6) and (7). In the first step, path tracking is not possible if the trajectory requires κ > κmax; this explains why the value of the linear velocity is not more than two in the following experiment. If the curvature constraint is satisfied, there will always be sufficient rotation speed to maintain the linear speed and the expected curvature. Therefore, the maximum linear speed is always used to define v(s), and the rotational speed is determined by ω(s) = κ(s)·v(s). In the second step, the time dimension is automatically assigned to v(s) and ω(s), which are then transformed into v(t) and ω(t). This can be achieved by determining a series of small time intervals, Δt. Each small time interval, determined by the sampling time in Simulink, corresponds to the motion between consecutive points on the path, assuming that the spatial linear and angular velocity variables v(s) and ω(s) are constant within the short time interval.
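
A minimal MATLAB sketch of this two-step procedure is given below; the curvature profile and maximum speed are illustrative assumptions, not the values used in the experiments.

% Step 1: spatial velocity profiles from an assumed example curvature profile
s     = linspace(0, 6, 200);      % arc length samples along the path [m]
kappa = 0.5*sin(0.5*s);           % assumed curvature profile kappa(s) [1/m]
vmax  = 0.5;                      % assumed maximum linear velocity [m/s]
vs    = vmax*ones(size(s));       % spatial linear velocity v(s)
ws    = kappa.*vs;                % spatial angular velocity w(s) = kappa(s)*v(s)

% Step 2: assign the time dimension using small intervals dt = ds/v
ds = [0, diff(s)];                % spatial step between consecutive path points
dt = ds./vs;                      % time needed to traverse each step
t  = cumsum(dt);                  % time stamps assigned to each path point
vt = vs;  wt = ws;                % v(t), w(t): the same values indexed by time
plot(t, vt, t, wt); legend('v(t)', '\omega(t)');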

2.2. Motion Control of the Differential Indoor Mobile Robot

This section presents the real-time experimental results derived in this study. A desktop-based mobile robot control simulation was performed using ROS in MATLAB. We considered the problem of tracking a straight line at a constant velocity. Using the above commands, the user's high-level computer was connected to a low-level computer running Linux. Step 1: Run the Simulink model for a specified time. Step 2: After the experiment, plot the reference and actual mobile robot trajectories using the MATLAB plot command. The following procedure completes the state-flow algorithm described in Algorithm 1.

Input: robot linear velocity v and angular velocity w. Output: motor output reference values vL and vR. Define: the robot maximum linear velocity as vmax, the turning radius as radius, the mobile robot chassis width as width, and the maximum output value as outmax.
CASE I: If w = 0 (go forward)
vL = v (left motor output reference)
vR = v (right motor output reference)
Else if w > 0 (front left turn)
radius = v/w
radiusL = radius - width/2
radiusR = radius + width/2
vL = w*radiusL, vR = w*radiusR
CASE II:
If v = 0 and w = 0: (stop)
Else if v = 0 and w > 0: (left self-rotation)
Else if v = 0 and w < 0: (right self-rotation)
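
A hedged MATLAB sketch of this conversion is given below; the function name wheelRefs, the saturation step, and the sign convention for self-rotation are assumptions for illustration rather than the exact Simulink implementation.

function [vL, vR] = wheelRefs(v, w, width, outMax)
% Convert commanded linear velocity v and angular velocity w into
% left/right motor reference values for the differential platform.
    if w == 0                        % Case I: go straight (forward or backward)
        vL = v;  vR = v;
    elseif v == 0                    % Case II: stop or self-rotation in place
        vL = -w*width/2;             % w > 0: left self-rotation, w < 0: right
        vR =  w*width/2;
    else                             % left or right turn on a circular arc
        radius  = v/w;               % turning radius of the chassis center
        radiusL = radius - width/2;  % radius traced by the left wheels
        radiusR = radius + width/2;  % radius traced by the right wheels
        vL = w*radiusL;
        vR = w*radiusR;
    end
    vL = max(min(vL, outMax), -outMax);  % saturate to the maximum output value
    vR = max(min(vR, outMax), -outMax);
end

For example, wheelRefs(0.3, 0.3, 0.5, 1.0) returns the left and right references for a left turn with a turning radius of 1 m for an assumed 0.5 m chassis width.
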
2.3. ROS Mobile Robot System Architecture

In the Simulink ROS environment, the position measurements and heading angle were the measurable output variables, and the linear and angular velocities were the input variables. The aim was to create a state-flow control law for the linear and angular velocities that enabled the mobile robot to follow the reference trajectory. The overall process is depicted in Figure 6. The linear and angular velocities were calculated and set in the state-flow based on the constraints described in Section 2.2. Thereafter, the user could plot the robot's moving direction using the plot command at the MATLAB prompt. The experimental results are shown in Figure 7. The collected data were saved using the MATLAB prompt. After the experiment, a comparison between the actual and reference trajectories of the state-flow algorithm was successfully performed.

The mobile robot navigation module was created to control all vehicle control activities and operations through the state-flow algorithm. The geometry_msgs/Twist ROS message format was used to create an ROS service that accepts the ROS topic input. The navigation module then processed the velocity commands received by the mobile robot as messages on the command/velocity topic, which resulted in the desired controlled action of the mobile robot. The complete ROS topic and node structure was developed according to reference [33]. The nodes were composed of two parts: the Linux system and MATLAB/Simulink. Figure 8 shows the Simulink-ROS connection to the Linux system used to control the mobile robot, and the Lidar sensor of the mobile localization package provided an accurate estimate of the robot's position. The inertial measurement unit (IMU) sensor was replaced with a workspace block in Simulink. The connection between the mobile robot and the interface between each mobile robot component is described in this section. In Simulink, the workspace block was used to extract the Cartesian coordinates of the mobile robot. Up to this point, the encoder data could be received by the upper computer in Simulink, and then the robot position, that is, the Cartesian coordinates x and y and the heading angle h, could be estimated using these data. We performed the experiment using a circular trajectory. Step 1: Type the IP command at the MATLAB prompt using the following structure, rosinit('http://192.168.0.xxx:11311', 'NodeHost', '192.168.0.xxx'), as shown in Figure 8. The first IP address is that of the Linux notebook, and the second is the IP address of the computer on which the user runs Simulink. The upper- and low-level computers were connected through the IP commands at the MATLAB prompt.
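
As a hedged sketch of this connection (the topic names /cmd_vel and /odom and the placeholder IP addresses are assumptions for illustration, not the exact configuration used in the experiments):

% Connect MATLAB to the ROS master running on the Linux computer
rosinit('http://192.168.0.xxx:11311', 'NodeHost', '192.168.0.xxx');

% Publisher for velocity commands and subscriber for odometry feedback
cmdPub  = rospublisher('/cmd_vel', 'geometry_msgs/Twist');
odomSub = rossubscriber('/odom', 'nav_msgs/Odometry');

% Send one velocity command (linear v and angular w)
cmdMsg = rosmessage(cmdPub);
cmdMsg.Linear.X  = 0.2;               % linear velocity [m/s]
cmdMsg.Angular.Z = 0.1;               % angular velocity [rad/s]
send(cmdPub, cmdMsg);

% Read back the estimated Cartesian coordinates of the robot
odomMsg = receive(odomSub, 5);        % wait up to 5 s for a message
x = odomMsg.Pose.Pose.Position.X;
y = odomMsg.Pose.Pose.Position.Y;

rosshutdown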

2.4. Autonomous Mobile Robot Hardware Systems and Sensors

The mobile robot frame was based on a processing platform composed of an NVIDIA Jetson AGX Xavier, Linux Ubuntu 16.04 LTS with ROS, and MATLAB 2023a. The ROS node diagram shows the connection to the control system. A Lenovo ThinkPad laptop equipped with an Intel Core i5-6200 CPU clocked at 2.40 GHz was used in the control system to process the real-time MATLAB/Simulink model while simultaneously communicating over Wi-Fi with the Jetson AGX Xavier embedded board and the Roboteq HDC 2460 brushed DC motor controller. The Jetson AGX Xavier computer acted as the host and motion control module, communicating through an RS-232 serial protocol, and the feedback information of both motors was transmitted to the control module through the serial port to complete the state-flow path-following method; the complete hardware connection is described in Figure 7. During platform operation, the host computer used the Roboteq HDC 2460 brushed DC motor controller and sent the PWM signals, amplified by the motor driver, to the drive module. Finally, the right and left encoders sent feedback information from the two motors, as shown in Figure 7. An actual indoor environment was developed for the experimental setup. It had dimensions of 15.5 m × 6.5 m and provided an overview of the indoor environment, consisting of an actual four-floor setup. The main research aim of this study was to track a predetermined path previously set by a designer using the state-flow block diagram in MATLAB/Simulink. To evaluate the proposed state-flow algorithm for tracking the linear and angular velocities of a mobile robot on a predetermined trajectory, a series of desktop-based mobile platform experiments was conducted using MATLAB/Simulink and an ROS connection to the mobile platform.

3. Results and Discussion

3.1. Setup State-Flow Algorithm

For the experiment, we must determine the relationship between the linear velocity signal and the actual mobile robot velocity. Table 1 shows this relationship: the actual mobile robot velocity is 0.668 m/s when the corresponding signal value is input in the Simulink model. In a greenhouse environment, 0.668 m/s is a sufficient velocity for automatic driving. Seder et al. [20] conducted several indoor mobile robot experiments using scalar values, and the maximum velocity was 0.6 m/s. The experiments were divided into three parts: stationary obstacle avoidance, navigation based on the radius of the circle, and a path-following experiment in a narrow environment to verify the left- and right-turn operation of the mobile robot. We also determined the ratio between the linear and angular signal values to maintain the radius of the circle at three tile lengths. Figure 7(b) shows that the error dynamics can be determined as the difference between the desired trajectory set by the user and the actual trajectory of the mobile robot. The constraints on the linear velocity can be determined through various experiments (Table 1).

Figures 9(a) and 9(b) show the relationship between the linear and angular velocity signals and the actual in-wheel motor and steering motor velocities.

The user sets the time and the linear and angular velocities before the experiment. After setting the optimal parameters, the user runs the Simulink model, and the mobile robot begins to move. The adjusted signal is sent to the mobile robot's Linux computer program using the ROS publish block, and the ROS subscribe block returns the x and y positions of the mobile robot, as shown in Figures 10 and 11. Furthermore, the workspace block sends the x and y coordinates of the mobile robot to the MATLAB workspace. Finally, the user can plot the figure after the experiment.

3.2. Trajectory Tracking Test 1 with State-Flow Design: Stationary Obstacle Avoidance

In Case I, only the linear velocity variables are considered as input variables, and thus the corresponding path is a straight line. In Case II, the linear velocity variables take only negative values, which means that the mobile platform must move backward. These are simple scenarios in which the platform does not have to handle left or right turns. However, in the general Cases III and IV, angular variables are present, which guarantees that the mobile platform can avoid stationary obstacles within the Lidar map using the left and right turns of the mobile robot. The overall structure of the obstacle avoidance scheme is shown in Figures 11(a) and 11(b).

We set the parameters listed in Table 2 to control the mobile robot's state-flow output. The time-series angular and linear velocity signals were generated within the state-flow block based on the times set in the conditions. Furthermore, the parameters for obstacle avoidance test 2 are given in Table 3.
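
A minimal MATLAB sketch of this time-interval logic is given below; the switching times and velocity values are illustrative assumptions, not the values listed in Tables 2 and 3.

function [v, w] = velocitySchedule(t)
% Piecewise-constant linear velocity v and angular velocity w selected
% by elapsed time t, emulating the sequential states of the state-flow block.
    if t < 5                 % straight segment toward the obstacle
        v = 0.3;  w = 0;
    elseif t < 10            % quarter-circle left turn
        v = 0.3;  w = 0.3;
    elseif t < 15            % straight segment passing the obstacle
        v = 0.3;  w = 0;
    elseif t < 20            % quarter-circle right turn back to the path
        v = 0.3;  w = -0.3;
    else                     % stop after the final segment
        v = 0;    w = 0;
    end
end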

The obstacle avoidance block is described as follows: the state-flow block above is built in a sequential structure owing to the change in the initial coordinates between states, which was determined by heuristic experiments from the 4th block to the 5th block. The radius of the circle relevant to the right and left turns was determined only by the controlled variable, the linear velocity v, based on the kinematic equation of the bicycle-type model described in Figure 12.

Figure 13 shows the datasets obtained after the stationary obstacle avoidance experiment. The kinematic equation is as follows:

For Cases II and III, if v = w, the above equation implies the following logic:

Equation (4) above describes a circle with radius v.
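
For reference, integrating the unicycle kinematics under the assumption of constant v and ω (a standard derivation given here for illustration) yields

\[
x(t) = x_0 + \frac{v}{\omega}\bigl(\sin(\theta_0 + \omega t) - \sin\theta_0\bigr), \qquad
y(t) = y_0 - \frac{v}{\omega}\bigl(\cos(\theta_0 + \omega t) - \cos\theta_0\bigr),
\]

so that

\[
(x - c_x)^2 + (y - c_y)^2 = \left(\frac{v}{\omega}\right)^2, \qquad
c_x = x_0 - \frac{v}{\omega}\sin\theta_0, \quad c_y = y_0 + \frac{v}{\omega}\cos\theta_0,
\]

that is, a circle of radius R = v/ω. With a unit-magnitude angular velocity signal, the radius equals the linear velocity v, and doubling v doubles the radius, as used in Section 3.4.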

3.3. Trajectory Tracking Test 1 with State-Flow Design: Stationary Obstacle Avoidance

An actual indoor environment was used in this study. It had dimensions of 15.5 m × 6.5 m; 3D simultaneous localization and mapping (SLAM) based on the Lidar measurements was used for localization. Figure 14(a) shows the indoor environment in the 3D SLAM, representing Lidar mapping and odometry for mobile robot localization; the left image shows the map and trajectory estimated using the SLAM algorithm, and the right image shows the corresponding feature points. The input signals v and w were discretized signals. At a specific sampling time, the signal was captured and then held until the next sampling instant, which corresponds to a zero-order hold function in Simulink. The Lidar intensity for mobile robot localization estimation is shown in Figure 14(b). The shape of the input signal is shown in Figure 14(c), which shows the estimated path and the ground truth obtained using odometry and Lidar mapping. The blue path depicts the actual path of the robot, and the red path depicts the corresponding ground truth values. The difference between the actual and reference paths is indicated as the root-mean-square error (RMSE). Figure 14(d) shows the estimated heading angle of the indoor mobile robot along the same path during mobile robot localization.
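
A minimal MATLAB sketch of the RMSE computation between the logged actual path and the reference path is shown below; the synthetic example data stand in for the data collected by the workspace and odometry blocks.

% Example reference path and a noisy logged path (placeholders for the
% actual data collected during the experiment)
xRef = linspace(0, 3, 100)';   yRef = zeros(100, 1);
xAct = xRef + 0.02*randn(100, 1);
yAct = yRef + 0.02*randn(100, 1);

% Root-mean-square position error between the actual and reference paths
rmse = sqrt(mean((xAct - xRef).^2 + (yAct - yRef).^2));
fprintf('Path-following RMSE: %.3f m\n', rmse);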

3.4. Navigation Result of the Radius of the Circle

The state-flow method was applied to an actual mobile robot in an indoor environment. First, poles were placed at three tile lengths from the pillar in the indoor environment. After the poles were set, the map was scanned using a 16-channel Velodyne Lidar to calculate the radius of the mobile robot's left and right turns, and the process is shown in Figures 15(a) and 15(b). The above logic implies that if the angular velocity w is nonzero, the path is a circle with a radius determined by the linear velocity v. The sign of the angular velocity w determines the direction of the circle: if w is negative, the circular direction is clockwise, and if w is positive, the circular direction is counterclockwise, as shown in Figure 15. Using the dynamic window approach (DWA), the signal was disrupted, as described in [20]. However, the state-flow method produced a constant signal, which implied that smooth trajectory tracking and accurate path following were achieved, as shown in Figure 16(c). This indicated that the mobile platform continued moving forward or backward depending on the sign of the linear velocity v. To show that the linear and angular velocity signals set the radius of the circle, we conducted an experiment in which the linear velocity signal was doubled, and the radius was two times larger than the previous one. Theoretically, the radius of the circle doubles when the linear velocity signal is doubled, based on Equation (5). To calculate the radius of the circle, we set a pole at a point at a known distance from the pillar point. After scanning the 2D Lidar map, we calculated the reference path based on the radii of the first and second quarter-circles. Figure 16(d) shows the estimated path and heading angle of the mobile robot when it moved along semicircular paths while turning right and left at right angles.

3.5. Navigation Result of the Path Following in the Narrow Environment

The reference trajectory was given by the reference position coordinates and heading angle. This concerns the kinematic trajectory controller, which is not relevant to the state-flow. In addition to the mathematical review, we conducted an experiment using a digital clock, given the linear and angular velocity conditions in Equation (5). Equations (6) and (7) show that the mobile robot reference trajectory can be determined based on the given angular and linear velocities, and the poles can be set to calculate the radii of the quarter-circles, which are related to the left and right turns of the mobile robot (Figure 15). The radius of the circle in Figure 16(c) is automatically obtained from the ROS odometry block in Simulink, which matches the real distance in the Lidar map. Before the experiment, we set two poles, indicating the desired trajectory, which can be scanned as three points using the 16-channel Velodyne Lidar. Using the two poles, the generated path in the narrow indoor environment is described in Figure 17. In detail, when the mobile robot moves in a semicircle from one point to another, the radius of the circle in Figure 18(c) can be obtained by calculating the distance between the first and last points and the center point, previously scanned in the Lidar map (Figure 16(b)). This is because the distance between the points in the Lidar map reflects the measured value in the real environment. Furthermore, the actual path signal shown in Figures 16(c) and 18(c) can be read without noise, better than the other waypoint methods described in [34].
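
A minimal sketch of this radius calculation from the scanned points is given below; the coordinates are illustrative placeholders, not the values scanned in the Lidar map.

% First and last points of the semicircle and the scanned center point,
% expressed in the Lidar map coordinates [m]
pFirst  = [0.0, 0.0];
pLast   = [0.0, 2.4];
pCenter = [0.0, 1.2];

rFirst = norm(pFirst - pCenter);   % distance from the first point to the center
rLast  = norm(pLast  - pCenter);   % distance from the last point to the center
radius = mean([rFirst, rLast]);    % estimated radius of the semicircular turn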

Figure 18(a) shows the kinematics and odometry computed in MATLAB for the mobile robot, the local planner, and the 3D mapping of the SLAM algorithm used to understand the environment in which the mobile robot and the global planner are located. Figure 18(b) shows the Lidar intensity used to estimate the mobile robot localization in different environments. After implementing the Simulink desktop-based mobile robot control simulation, the red line representing the ground truth path was plotted and compared with the blue line representing the reference path in Figure 18(c), which shows the reference path previously generated based on the coordinates within the Lidar map. The motion equation is given by Equation (1). The tracking error dynamics are shown in Figures 19(a) and 19(b).

The above control law holds on the condition that the controlled inputs are bounded. The coefficients C1, C2, and C3 are positive constants that can be adjusted in response to the velocity constraints. First, the linear and angular velocity signals were sent in the form of an open loop to extract the actual path, and then the error dynamics with respect to the reference path could be obtained. In the second, closed-loop experiment, the position-controlled linear and angular velocities were set in the state-flow block, and the experiment was then conducted based on the parameters calculated from Equations (6) and (7). The aforementioned process is described in Figure 5.
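
As a hedged illustration (this specific form is an assumption rather than the exact law used in the Simulink model), a commonly used kinematic trajectory-tracking law with positive gains C1, C2, and C3 is

\[
v = v_r\cos e_\theta + C_1 e_x, \qquad
\omega = \omega_r + C_2\, v_r\, e_y + C_3\, v_r \sin e_\theta,
\]

where (e_x, e_y, e_\theta) is the tracking error expressed in the robot frame and v_r, ω_r are the reference linear and angular velocities.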

Table 3 shows the states in which the above state-flow method can achieve the required accuracy in an indoor environment and summarizes the RMSEs of the position error based on MATLAB/Simulink, with semicircular turns in one run for two different turning paths. Figures 16(a) and 16(b) show the surrounding environment during the experiment, scanned using the 16-channel Velodyne Lidar. Figure 19(a) shows that the position error in the first quarter-circle of the rows in this table was less than 5 cm; the remaining error was mainly due to the high slippage of the rollers in the headland. Based on the obtained results, the state-flow method can be applied to the indoor mobile robot path-following problem. Figure 19(b) shows that the second quarter-circle right-turn case had a significant RMSE at the end points, because the ROS publish block value was calculated from the IMU sensor, and in that case significant drift occurred, as described in [22]. However, in the actual world, we observed that the mobile platform followed the imaginary path within a 5 cm RMSE. The reference and actual paths described in Figure 18(c) are automatically obtained from the ROS odometry block in MATLAB/Simulink during the experiment. Figure 18(d) shows the corresponding heading angle during the experiment in the indoor environment. The initial point in the Lidar map was found, and the initial point of the actual path was moved to the mobile robot's starting point indicated in the Lidar map. By placing the reference and actual paths on the Lidar map, we show how the robot moved in the real environment. The RMSE during the experiment is given in Table 4.

4. Conclusions

In this study, the state-flow method was applied to an actual mobile robot following an imaginary path. First, the predetermined path was set, and then the sampling time and the linear and angular velocities were calculated using the linear and angular constraints determined through various experiments. Furthermore, the reference path was created within the Lidar map by setting multiple poles, and the RMSE was then calculated based on the pole and pillar coordinates in the Lidar map, proving that smooth trajectory tracking can be implemented via the state-flow method. State-flow path following has advantages over waypoint path following, such as pure pursuit, in that vibration is damped and smooth following is possible. We demonstrated that the x-coordinate, y-coordinate, and heading angle h smoothly follow the path.

Data Availability

The data are available from the corresponding author upon request ([email protected]).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All listed authors made a significant scientific contribution to the research in the manuscript, approved its claims, and agreed to be authors.

Acknowledgments

This work was partly supported by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center support program (IITP-2024-2020-0-01462, 50%) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2020R1A6A1A12047945, 50%).