Abstract

A multiaxle wheeled robot is difficult to control because of its long body and large number of axles, especially when avoiding obstacles and steering in narrow spaces. To solve this problem, a multisteering-mode control strategy based on front and rear virtual wheels is proposed, and the driving-trajectory prediction of the multiaxle wheeled robot is analyzed. On this basis, an obstacle avoidance control strategy based on trajectory prediction is proposed. By calculating the relationship between the lidar points of the obstacles and the area covered by the predicted trajectory, candidate steering schemes are evaluated iteratively until a feasible obstacle avoidance scheme is obtained. The mechanical structure, hardware, and software control system of a five-axle wheeled robot are designed. Finally, a Z-shaped obstacle avoidance experiment was carried out, and the results confirm the effectiveness of the proposed control strategy.

1. Introduction

A multiaxle wheeled robot has many wheels and a large carrying capacity, so it has been widely used in fields such as bulk cargo transportation in factories and field rescue. At the same time, however, the larger number of wheels and the longer body lead to complex steering control and a larger space occupied during steering. With the development of intelligent robot control technology, it is therefore essential to study automatic obstacle avoidance for such robots, to realize flexible steering and obstacle avoidance control, and in particular autonomous driving with low space occupancy.

At present, obstacle avoidance technology is a hotspot in the field of intelligent mobile robots. Studies on autonomous obstacle avoidance mainly focus on two aspects: multirobot swarm control [1–4] and single-robot control [5]. Research on single robots is further divided into global path planning [6, 7] and local obstacle avoidance path planning [8, 9]. Global path planning requires the robot to obtain the global distribution of obstacles from the starting point to the destination in advance [10]. In local path planning, by contrast, the robot does not know this distribution in advance, so it can only detect the obstacles around it and avoid them autonomously while driving [11]. In local navigation, the robot decides or controls its motion and orientation autonomously using sensors. Researchers have presented various algorithms, such as the genetic algorithm [12], neural networks [13–15], the ant colony optimization algorithm [16], fuzzy logic [17], neuro-fuzzy methods [18], the simulated annealing algorithm [19], and the particle swarm optimization algorithm [20]. However, these studies do not consider obstacle avoidance in narrow areas.

After the robot detects an obstacle, it must select a strategy for avoiding it. For the complex working environment of a ward inspection robot and its nonstructured, diverse obstacles, Yu et al. [21] contrived an obstacle detection and avoidance method based on a laser sensor and camera. Xie et al. [22] proposed an improved obstacle avoidance method based on the constraint of the obstacle boundary condition: by detecting obstacle boundaries, it obtains the reference direction of the robot while avoiding the loss of obstacle details; it eliminates the blindness of searching for a feasible direction and improves the efficiency of finding the reference direction. Chen et al. [23] constructed a supervisory control algorithm based on a barrier function method, which works in a plug-and-play fashion with any lower-level navigation algorithm. Rostami et al. [24] proposed a modified artificial potential field algorithm, with which the robot can pass obstacles around the target without collision and reach the target.

To navigate autonomously, mobile robots must sense the outer environment efficiently and reliably. A robot can approximate its environment by combining one or more sensors with proper methods to process the sensor information and then make decisions [25–28]. The main sensors for detecting obstacles are infrared radar, ultrasonic radar, vision modules, and laser radar (lidar). Because of their small detection range, infrared radars are of little use for obstacle detection. Because an ultrasonic sensor can only reflect the distance of obstacles in a certain area and cannot accurately reflect their specific angles, it cannot be used for accurate obstacle avoidance control of the robot. Lidars can be divided into 2D and 3D lidars [29, 30]. A 2D lidar can accurately reflect the distance and angular position of obstacles and is relatively cheap, so it is widely used at present. Lidar has the advantages of high precision, a large detection range, and a fast sweep frequency, and it is widely used for obstacle detection by ground mobile robots [31]. Madhavan and Adharsh [32] proposed a method for acquiring data from a lidar sensor and designed an algorithm that extracts obstacle information through a filtering and clustering technique. Peng et al. [33] proposed an efficient obstacle detection and avoidance algorithm based on 2D lidar, which obtains obstacle information by filtering and clustering the laser point cloud data. Takahashi et al. [34] developed a new emergency obstacle avoidance module for mobile robots that uses Light Detection and Ranging (LIDAR) to detect static and moving obstacles. A 3D lidar can obtain the height of obstacles, so it is suitable for bumpy environments such as the sea and the field. However, its data volume is large, which costs more processing time, and it is more expensive than a 2D lidar [35, 36]. Therefore, we use a 2D lidar as the detection sensor in this paper.

When avoiding obstacles, the robot needs to make fast steering decisions according to the distribution of obstacles to match its moving speed. To accelerate obstacle avoidance and realize it in real time, Wu et al. [37] proposed a deep reinforcement learning method, ANOA (Autonomous Navigation and Obstacle Avoidance), to enhance the intelligence of USVs in sophisticated marine environments; the ANOA algorithm performs real-time path planning with obstacle avoidance. Borenstein and Koren [38] developed a new real-time obstacle avoidance approach for mobile robots; by detecting unknown obstacles while simultaneously steering the robot to avoid collisions and advance toward the target, this method saves processing time. Xu et al. [39] proposed a new maximum-speed-aware velocity obstacle (MVO) algorithm, which can control a mobile robot to avoid one or multiple high-speed obstacles. Zaheer et al. [40] proposed a new real-time “Free-configuration Eigenspace” (FCE) technique for obstacle avoidance and navigation; the FCE enables an autonomous robot to detect unknown obstacles and avoid collisions while guiding the robot toward the target. Hu et al. [41] introduced an experiential aggregative reinforcement learning method based on multiattribute decision-making. To build a virtual force field between the obstacles and the robot, Zheng et al. [42] proposed a fast hybrid position/virtual-force controller. However, these studies do not consider fast obstacle avoidance for multiaxle robots.

Other researchers have studied the trajectory curves of obstacle avoidance to obtain good performance. Akka and Khaber [43] proposed a trajectory tracking control method for a mobile robot when there are static obstacles on the reference trajectory; the tracking control is based on a fuzzy parallel distributed compensation scheme, and a linear quadratic controller is used as the state feedback controller of each subsystem. By fully considering the nonholonomic constraints of mobile robot systems, Yuan et al. [44] proposed a new quadrupole potential field method to plan collision-free trajectories. Kuo et al. [45] implemented an obstacle avoidance method focusing on the curvature constraint of a nonholonomic mobile robot; it treats the robot as a particle and adopts a method that switches between a curvature-constrained streamline and a pure tracking streamline. However, these studies do not consider robot trajectory control in multimode steering.

Most of the robots used in the above research have round bodies, and few of them are multiaxle wheeled robots with long bodies. These obstacle avoidance strategies are very suitable when the length and width of the robot are almost equal or the shape of the robot is close to a circle. For a multiaxle robot, however, the body length is much larger than the width, so obstacle avoidance control becomes far more complex than for a wheeled robot with a roughly circular body. The multiaxle robot occupies more space and has a larger turning radius while turning. Therefore, it is urgent to propose a new obstacle avoidance method that exploits the characteristics of the long body to realize obstacle avoidance in narrow spaces and thus reflect the mobility and flexibility of a long-body robot.

At present, most robots can steer only their front wheels or part of their wheels, which results in low flexibility and requires a large turning space. Therefore, some studies have approached the problem from the perspective of wheel steering control, mainly including four-wheel steering [46], multiwheel steering [47], and trailer-following control [48]. However, these studies do not consider the multimode steering problem of the multiaxle robot. A multiaxle robot has a longer body and more wheels; if the steering of each wheel can be controlled individually, and in particular if different steering modes can be used for different distributions of obstacles, the difficulty of obstacle avoidance is significantly reduced, the space occupation decreases, and the flexibility improves. In addition, in actual obstacle avoidance, the obstacle avoidance algorithm cannot occupy too much time; otherwise, it will slow the avoidance itself.

Therefore, it is necessary to study fast, autonomous obstacle avoidance in narrow spaces in combination with the long body of the multiaxle wheeled robot.

The contribution of this article is a new obstacle avoidance control strategy based on multisteering modes and trajectory prediction. For the experiments, we designed a five-axle wheeled robot with a long body whose wheels can all steer independently.

The rest of this article is organized as follows. Section 2 analyzes the mechanical structure and hardware circuit of the lidar robot used in the obstacle avoidance experiments; it has five axles, and all wheels can steer and drive. Section 3 proposes three strategies: the multisteering mode strategy based on front and rear virtual wheels, the moving-trajectory prediction strategy, and the obstacle avoidance strategy based on trajectory prediction. Section 4 introduces the on-board control system and the upper computer control system of the robot. On this basis, Section 5 presents obstacle avoidance experiments with a Z-shaped obstacle distribution to verify the proposed control algorithm. Conclusions are given in Section 6.

2. Structure of the Five-Axle Lidar Wheeled Robot

2.1. Whole Structure

The mechanical dimensions of the robot are 250 mm × 600 mm. A mechanical overview of the robot is shown in Figure 1. The robot has 10 wheels, and each wheel is actuated by two motors: a steering servo that steers the wheel and a driving motor that makes the wheel roll [49]. The wheel system structure is shown in Figure 2. The steering servo is fixed to the robot body through the steering gear plate, and the wheel driving motor is fixed on the steering gear plate. In this way, the steering servo can turn the wheel driving motor and the wheel through the steering gear plate to steer the robot. Hence, every wheel has two degrees of freedom.

2.2. Hardware Components

The hardware circuit of the wheeled robot is composed of the steering servos, a lidar, the wheel driving motors, motor driving modules, a PCA9685 PWM driver, an ESP32S controller, a USART GPU serial communication touch screen, and a 7.4 V, 3000 mAh, 30 C lithium battery, as shown in Figure 3.

The core control unit is the ESP32S, which transmits the lidar data to the laptop through WiFi and receives autonomous obstacle avoidance instructions from the laptop. The ESP32S communicates with the lidar through a serial port to obtain the distance and angle of the surrounding obstacles. The ESP32S also communicates with the serial touch screen through a serial port to display the information about surrounding obstacles in real time; the displayed information can be configured through the screen. The ESP32S communicates with the PCA9685 module through the I2C bus. The PCA9685 module controls the steering servo of each wheel in PWM mode to set the wheel steering angle, and at the same time the speed of each wheel driving motor is controlled to set the wheel driving speed. Because each wheel is driven by two actuators, the steering angle, driving speed, and direction of each wheel can be controlled independently, which enables very flexible multisteering modes.

For cost efficiency, this article uses a Delta-2B laser radar, a 2D lidar. Its maximum scanning range is 8 m and its minimum scanning range is 0.2 m. The scanning angle is 360° with an angular resolution of 0.592°, so 608 pairs of lidar data (angle and distance) are obtained per revolution. The scanning frequency is 5–10 Hz, so the interval between scans is about 100–200 ms; the control unit therefore has to calculate the next movement direction of the robot from these lidar points within about 100 ms. The lidar uses a 3.3 V TTL serial port (UART) as its communication interface; to obtain the live scan data, the lidar is connected to the ESP32S over UART. The data obtained by the ESP32S from the lidar contain the distance of each obstacle point and the angle at which the lidar scanned it; these data are then used by the obstacle detection and avoidance algorithms. The lidar is mounted on top of the robot body to obtain a proper, unobstructed plane of measurement.
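For illustration, the conversion of the lidar's (angle, distance) pairs into Cartesian obstacle points, as consumed by the algorithms below, can be sketched as follows (a minimal sketch with names of our choosing, not the Delta-2B driver code):

```python
import math

def polar_to_points(scan, max_range_mm=8000, min_range_mm=200):
    """Convert one lidar revolution of (angle_deg, distance_mm) pairs into
    Cartesian points in the robot frame (x forward, y left). Readings outside
    the sensor's rated 0.2-8 m window are dropped."""
    points = []
    for angle_deg, distance_mm in scan:
        if not (min_range_mm <= distance_mm <= max_range_mm):
            continue
        a = math.radians(angle_deg)
        points.append((distance_mm * math.cos(a), distance_mm * math.sin(a)))
    return points

# Example: about 608 pairs per revolution at the 0.592 deg resolution.
scan = [(k * 0.592, 1500.0) for k in range(608)]
pts = polar_to_points(scan)
```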

3. Control Strategy

3.1. Multimode Steering Control Strategy Based on Front and Rear Virtual Wheels

When the robot turns left or right, to ensure the symmetry of the trajectory and ease of control, we propose to place virtual steering wheels at the midpoints of the first and last axles of the robot, as shown in Figures 4–8. In this way, according to the Ackermann steering principle, the steering angles of all actual wheels of the robot can be calculated from the steering angles AF and AR of the two virtual wheels. According to the needs of obstacle-avoiding steering, the following steering modes are defined in terms of the front and rear virtual wheel steering angles AF and AR and the moving speed V.

3.1.1. Adverse-Phase Steering Mode

This is the adverse-phase steering mode, as shown in Figure 4. In this mode, the steering directions of the front and rear virtual wheels are opposite, so the actual steering directions of the front and rear wheels of the robot are also opposite. The robot has a relatively small turning radius, and the space used for steering is also small; therefore, this mode is suitable for relatively narrow steering spaces.

The distance between the turning center point O and the first axle of the robot, projected onto the longitudinal centerline of the robot, is given by equation (1), where Li is the distance between the ith and the (i + 1)th axles, and n is the number of robot axles (n = 5 in this paper). AF and AR are the steering angles, in degrees, of the front and rear virtual wheels located at the centers of the first and last axles, respectively. When a wheel steers clockwise, its angle is positive; otherwise, it is negative.

The distance between the turning center point O and the longitudinal center axis of the robot body is given by equation (2).

The steering angle θij of the jth wheel in the ith axle is given by equation (3), where θij is positive when the wheel steers clockwise and negative otherwise; i = 1, 2, …, n, where n is the number of robot axles; and j = 1, 2, where 1 denotes the left wheel and 2 the right wheel. Lk is the distance between the kth and (k + 1)th axles, and B is the distance between the left and right wheels of an axle.

The distance Rij from the turning center point O to the jth wheel in the ith axle is given by equation (4).

The maximum turning radius Rmax is given by equation (5).

According to the size of each wheel's turning radius, the program applies a speed differential (the degree to which the speed command deviates from the middle value 90): the speed Vij of the jth wheel in the ith axle is given by equation (6), where Vij is the wheel's speed command and V is the robot moving control speed. V is defined by equation (7), where 180 and 0 correspond to the maximum speed forward and the maximum speed backward, respectively, and the correspondence between the threshold value and the speed is linear.
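For concreteness, the geometry of this mode can be illustrated with a short code sketch under standard Ackermann assumptions; the dimensions, the sign convention, and all function names below are ours and not taken from the paper's implementation:

```python
import math

AXLE_SPACING_MM = [100.0, 100.0, 100.0, 100.0]  # L1..L4, assumed values
TRACK_MM = 200.0                                # left-right wheel distance B, assumed

def adverse_phase(af_deg, ar_deg, v_cmd):
    """Per-wheel steering angle (deg), turning radius (mm), and speed command
    for the adverse-phase mode (cf. equations (1)-(6)). Angles are treated as
    magnitudes; af_deg and ar_deg must both be nonzero in this mode."""
    L = sum(AXLE_SPACING_MM)
    tf = math.tan(math.radians(abs(af_deg)))
    tr = math.tan(math.radians(abs(ar_deg)))
    lf = L * tf / (tf + tr)        # eq. (1): O projected behind the first axle
    r0 = lf / tf                   # eq. (2): lateral offset of O from the centerline
    wheels, x = {}, lf             # x: longitudinal offset of the current axle from O
    for i in range(len(AXLE_SPACING_MM) + 1):
        for j, side in ((1, -TRACK_MM / 2), (2, TRACK_MM / 2)):
            angle = math.degrees(math.atan2(x, r0 + side))          # eq. (3), one sign convention
            wheels[(i + 1, j)] = (angle, math.hypot(x, r0 + side))  # eq. (4)
        if i < len(AXLE_SPACING_MM):
            x -= AXLE_SPACING_MM[i]
    r_max = max(r for _, r in wheels.values())                      # eq. (5)
    # eq. (6): scale each wheel's deviation from the neutral command 90 by r/r_max
    return {k: (a, r, 90 + (v_cmd - 90) * r / r_max) for k, (a, r) in wheels.items()}
```

The modes of Sections 3.1.2 and 3.1.3 correspond to the limits AF → 0 and AR → 0 of this geometry and are handled separately below.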

3.1.2. Nonsteering First-Axle Mode

This is the nonsteering first-axle wheel mode, as shown in Figure 5.

The distance between the turning center point O and the longitudinal center axis of the robot body is given by equation (8).

The steering angle of the jth wheel in the ith axle is given by equation (9).

The distance from the turning center point O to the jth wheel in the ith axle is given by equation (10), where j = 1, 2. In this mode, the wheels of the fifth axle have the largest turning radius, so the maximum radius is given by equation (11).

According to the size of the turning radius, the program applies the speed differential to obtain the speed of each wheel, as given by equation (12).

3.1.3. Nonsteering Last-Axle Mode

This is the nonsteering last-axle wheel mode, as shown in Figure 6.

The distance between the turning center point O and the longitudinal center axis of the robot body is given by equation (13).

The steering angle of the jth wheel in the ith axle is given by equation (14).

The distance from the turning center point O to the jth wheel in the ith axle is given by equation (15), where j = 1, 2.

In this mode, only the first-axle wheels have the largest turning radius, so the maximum radius is given by equation (16).

The speed of the jth wheel in the ith axle is given by equation (17).
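The two pinned-axle modes of Sections 3.1.2 and 3.1.3 can be sketched in the same style, reusing the assumed dimensions and conventions from the Section 3.1.1 code (again our reconstruction, not the authors' implementation):

```python
def axle_pinned(r0_mm, pinned_axle, v_cmd):
    """Nonsteering first-axle mode (pinned_axle=1, eqs. (8)-(12)) and
    nonsteering last-axle mode (pinned_axle=5, eqs. (13)-(17)): the turning
    center O lies on the pinned axle's line at lateral offset r0_mm."""
    offsets = [0.0]
    for s in AXLE_SPACING_MM:
        offsets.append(offsets[-1] - s)          # longitudinal axle positions
    x0 = offsets[pinned_axle - 1]                # O sits on this axle's line
    wheels = {}
    for i, x in enumerate(offsets, start=1):
        for j, side in ((1, -TRACK_MM / 2), (2, TRACK_MM / 2)):
            wheels[(i, j)] = (math.degrees(math.atan2(x - x0, r0_mm + side)),
                              math.hypot(x - x0, r0_mm + side))
    r_max = max(r for _, r in wheels.values())   # at the 5th axle when pinned_axle=1
    return {k: (a, r, 90 + (v_cmd - 90) * r / r_max) for k, (a, r) in wheels.items()}
```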

3.1.4. In Situ Rotation Mode

This is the in situ rotation mode, as shown in Figure 7. The turning center is located between the 2nd and 3rd axles.

The steering angle of the jth wheel in the ith axle is given by equation (18).

The distance from the turning center point O to the jth wheel in the ith axle is given by equation (19).

In this case, only the wheels of the first and fifth axles have the maximum radius, so the maximum radius is given by equation (20).

The speed of the jth wheel in the ith axle is given by equation (21).

3.1.5. Lateral Driving Mode

This is the 90° lateral driving mode, as shown in Figure 8. The steering angle of the jth wheel in the ith axle is given by equation (22).

The speed of the jth wheel in the ith axle is given by equation (23).
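These last two modes admit a similar sketch under the same assumptions and conventions as the earlier code (the exact longitudinal position of the rotation center depends on the axle spacing; the paper places it between the 2nd and 3rd axles):

```python
def in_situ_rotation(v_cmd):
    """In situ rotation (eqs. (18)-(21)): every wheel is set tangent to its
    circle around a center O on the centerline, and the two sides of the
    robot roll in opposite directions so the body spins in place."""
    offsets = [0.0]
    for s in AXLE_SPACING_MM:
        offsets.append(offsets[-1] - s)
    x0 = (offsets[1] + offsets[2]) / 2.0         # between the 2nd and 3rd axles
    wheels = {}
    for i, x in enumerate(offsets, start=1):
        for j, side in ((1, -TRACK_MM / 2), (2, TRACK_MM / 2)):
            a = math.degrees(math.atan2(x - x0, side))   # tangent to the circle
            a = (a + 90) % 180 - 90                      # a wheel heading is a line: fold into [-90, 90)
            wheels[(i, j)] = (a, math.hypot(x - x0, side))
    r_max = max(r for _, r in wheels.values())   # eq. (20): with the real robot's spacing,
                                                 # the 1st and 5th axles share this radius
    return {k: (a, r, 90 + (1 if k[1] == 2 else -1) * (v_cmd - 90) * r / r_max)
            for k, (a, r) in wheels.items()}     # eq. (21): the two sides run oppositely

def lateral_drive(v_cmd):
    """Lateral 90-degree driving (eqs. (22)-(23)): all wheels turned 90 deg,
    straight sideways motion (infinite radius), uniform speed command."""
    return {(i, j): (90.0, math.inf, float(v_cmd)) for i in range(1, 6) for j in (1, 2)}
```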

3.2. Trajectory Prediction Strategy

This module mainly calculates the robot's next running trajectory according to the steering angles AF and AR of the front and rear virtual wheels and the robot moving speed V. The specific geometric relationship is shown in Figure 9.

When the robot is steering, according to the Ackermann steering principle, all wheels are controlled to turn around the same instantaneous steering center. Based on this principle, we can calculate the circular steering trajectory of each wheel and obtain the direction of the trajectory from the moving direction.

When the robot is running, the trajectories of the wheels of the 2nd, 3rd, and 4th axles are all enveloped between the trajectories of the first and fifth axles. Therefore, only the trajectories of the first and fifth axles need to be calculated to predict the whole robot's moving trajectory. In this way, the trajectory calculation model of the whole multiaxle robot is simplified to that of a two-axle robot, which greatly reduces the calculation work. This is shown in Figure 9.

The trajectory calculation for the first three steering modes is the same: for example, setting the steering angle of the front or rear virtual wheel to zero in the first mode yields the trajectory of the second or third mode, respectively. The fourth and fifth modes do not need complex trajectory calculation: in the fourth mode, it is only necessary to detect whether there are obstacles inside the circular trajectory of Figure 7, and in the fifth mode, only whether there are obstacles on the left or right side of the robot. Therefore, only the trajectory of the most typical first steering mode is analyzed here.

According to equation (3), the steering angles θ11, θ12, θ51, and θ52 of the wheels of the first and fifth axles can be obtained. The angle unit is rad.

The longitudinal distance between the robot turning center O and the front wheels is given by equation (24) (in mm).

The longitudinal distance from the center O to the rear wheels is given by equation (25) (in mm).

The lateral distance from the center O to the left wheel of the 1st axle is given by equation (26) (in mm).

The coordinates of the turning center point O are given by equation (27), where x11 and y11 are the x-coordinate and the y-coordinate of the left wheel of the 1st axle, respectively.

According to equations (4) and (24)–(27), the coordinates of the kth point on the trajectory curve of the body corner point corresponding to the jth wheel in the ith axle in Figure 9 can be obtained with equation (28), where Δx and Δy are the compensation distances between the four body corner points and the corresponding four wheels along the x-axis and y-axis, respectively, and ΔA is the iterative step size of the steering angles AF and AR.
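As an illustration of the role of equation (28), the following sketch samples the circular arc swept by one body corner point around the turning center O (the coordinates, step size, and function name are ours; the Δx/Δy corner compensation is assumed already applied to corner_xy):

```python
def corner_trajectory(corner_xy, center_xy, step_rad=0.02, n_points=50):
    """Sample the arc that a body corner point sweeps around the turning
    center O; the sampled points are later tested against the lidar points
    (Section 3.3)."""
    cx, cy = center_xy
    x, y = corner_xy
    r = math.hypot(x - cx, y - cy)          # corner's turning radius
    a0 = math.atan2(y - cy, x - cx)         # starting angle on the circle
    return [(cx + r * math.cos(a0 + k * step_rad),
             cy + r * math.sin(a0 + k * step_rad)) for k in range(n_points)]
```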

3.3. Obstacle Avoidance Control Strategy Based on Trajectory Prediction

If there are no lidar points in the area enveloped by the steering trajectory lines of the robot's wheels, as shown in Figure 10, the robot can pass the current obstacles with this steering scheme, and the scheme is feasible. Therefore, after the trajectory lines corresponding to the steering angles AF and AR are iteratively calculated and the relationship between the lidar points and the trajectory envelope region is evaluated, if all lidar points lie outside the envelope region, the obstacle avoidance steering scheme is feasible. In that case, AF, AR, and the speed V form the obstacle avoidance control scheme for the robot's next move.

Since the steering angles of all other wheels are calculated from the front and rear virtual wheel steering angles AF and AR, only AF, AR, and the speed V need to be determined in the obstacle avoidance analysis. The steering angles of the actual wheels are then calculated by the robot's on-board control unit, the ESP32S, according to the multimode steering control strategy.
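The core of this feasibility test can be sketched as a simple geometric check (our simplification: the envelope is reduced to the annulus between the innermost and outermost trajectory radii around O; a full implementation would also limit the check to the sector actually swept):

```python
def scheme_is_feasible(lidar_pts, center_xy, r_inner, r_outer, margin_mm=0.0):
    """Return True if no lidar point falls inside the band swept between the
    innermost and outermost predicted trajectories around the turning center."""
    cx, cy = center_xy
    for px, py in lidar_pts:
        r = math.hypot(px - cx, py - cy)
        if r_inner - margin_mm < r < r_outer + margin_mm:
            return False
    return True
```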

When the robot moves forward, as shown in Figure 9, the obstacle avoidance strategy based on trajectory prediction is shown in Figure 11.

3.3.1. Comparing the Gap Width with the Body Width to Preselect Feasible Gaps

First, among the obstacle lidar points on the front, left, and right sides, the gaps that meet the following driving conditions are found.

If the gap is bounded by two lidar points, as shown in Figure 9(a), the gap width between the two adjacent lidar points is compared with the body width. If the gap width is smaller than the body width, the robot cannot pass between the two lidar points, so the gap is discarded and the gap between the next pair of adjacent lidar points is examined. Otherwise, the robot may be able to pass between the two lidar points, so this is a candidate gap and the program continues with Section 3.3.2.

If the gap has only one lidar point, as shown in Figure 9(b), the program goes directly to Section 3.3.2.

The flowchart of the gap width comparison strategy is shown in Figure 12.
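A sketch of this preselection over one lidar revolution (our helper; the lidar points are assumed already converted to Cartesian coordinates and ordered by scan angle, with the single-point case of Figure 9(b) handled downstream in Section 3.3.2):

```python
def candidate_gaps(lidar_pts, body_width_mm):
    """Keep only gaps between adjacent lidar points at least as wide as the
    robot body (the test of Figure 12)."""
    gaps = []
    for p, q in zip(lidar_pts, lidar_pts[1:]):
        width = math.dist(p, q)              # gap width between adjacent points
        if width >= body_width_mm:
            gaps.append((p, q, width))
    return gaps
```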

3.3.2. Steering Angle Iteration

The steering angles of the front and rear virtual wheels are iterated in two layers, as shown in Figure 13. According to the position of the gap and the mechanical structure of the robot, if the gap is on the right front side, the range of the wheel steering angles is given by equation (29).

If the gap is on the left front side, the range of the wheel steering angles is given by equation (30).

According to equations (3), (4), and (24)–(28), the coordinates of the turning center point O can be obtained.

For the lidar points at the two sides of a gap, as shown in Figure 9, the point closer to the longitudinal centerline of the robot body is the inside point, and the point farther from it is the outside point. The distance between the inside point of the gap and the turning center O is given by equation (31) (in mm), where xm,in and ym,in are the x and y coordinates of the inside point of the mth gap, respectively, and a compensation value (in mm) accounts for the difference between the wheel and the body steering trajectories.

The calculation of the distance from the outside point of the gap to the turning center O is divided into two cases:

(1) Both points of the gap exist, as shown in Figure 9(a). The distance from the outside point of the gap to the turning center O is given by equation (32) (in mm), where xm,out and ym,out are the x and y coordinates of the outside point of the mth gap, respectively.

(2) The gap has only one point and there is no lidar point on the other side, as shown in Figure 9(b). In this case, the outside distance is set by equation (33).

The front and rear virtual wheel steering angles AF and AR are iteratively changed within the range specified by equation (29) or (30) until the gap width is larger than the robot body width, the inside point of the gap lies outside the four trajectory curves, and the outside point lies inside the four trajectory curves, as shown in Figure 9; that is, until the conditions of equation (34) are satisfied. The angles of the front and rear virtual steering wheels that satisfy equation (34) are the steering angles with which the robot can pass through the gap.

If no front and rear virtual wheel steering angles AF and AR satisfy equation (34) in the whole iteration, the robot cannot pass through the current gap, and the search continues with the next gap among the remaining lidar points by repeating Sections 3.3.1 and 3.3.2.
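A compact sketch of this two-layer search, reusing the earlier helpers (again our reconstruction under the stated sign conventions; the angle ranges are assumed to exclude zero steering, since the adverse-phase geometry is undefined there):

```python
def envelope_for(af_deg, ar_deg):
    """Assumed helper: turning-center coordinates in a robot frame with the
    origin at the first-axle midpoint (x forward, y toward the turn side),
    plus the innermost/outermost wheel radii from the Section 3.1.1 sketch."""
    L = sum(AXLE_SPACING_MM)
    tf = math.tan(math.radians(abs(af_deg)))
    tr = math.tan(math.radians(abs(ar_deg)))
    lf = L * tf / (tf + tr)                      # eq. (1)
    radii = [r for _, r, _ in adverse_phase(af_deg, ar_deg, 90).values()]
    return (-lf, lf / tf), min(radii), max(radii)

def search_steering(lidar_pts, af_range, ar_range, step_deg=1.0):
    """Two-layer iteration over candidate AF x AR pairs (eq. (29)/(30)):
    return the first pair whose predicted envelope excludes all lidar
    points (the role of eqs. (31)-(34)), or None if the gap is impassable."""
    af = af_range[0]
    while af <= af_range[1]:
        ar = ar_range[0]
        while ar <= ar_range[1]:
            center, r_in, r_out = envelope_for(af, ar)
            if scheme_is_feasible(lidar_pts, center, r_in, r_out):
                return af, ar                    # feasible steering scheme
            ar += step_deg
        af += step_deg
    return None                                  # try the next gap (Section 3.3.1)
```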

4. Control System

The whole control system of the obstacle avoidance robot is divided into the on-board control system program and the upper computer program.

4.1. On-Board Control System Program

This program receives the data from the lidar module, calculates the obstacle information, and displays the relevant information on the touch screen. At the same time, the lidar information is sent to the upper computer through WiFi, and the obstacle avoidance steering and speed control instructions sent back by the upper computer are received through WiFi. These instructions are decomposed into the steering angles and speeds of the 10 wheels. The on-board control system includes a WiFi initialization subroutine, a lidar data reading and WiFi data sending subroutine, a touch screen display subroutine, a WiFi instruction reading subroutine, and a steering and driving speed instruction execution subroutine.

4.1.1. WiFi Initialization Subroutine

It sets the account, password, IP address, port number, and so on for WiFi communication, to realize wireless data transmission over WiFi.

4.1.2. Lidar Data Reading and WiFi Data Sending Subroutine

It reads the obstacle information produced by the lidar through a serial communication port, obtains the angle and distance distribution of the obstacle points, and transmits the information to the upper computer through WiFi.

4.1.3. On-Board Screen Display Subroutine

It displays the lidar point data and the robot running conditions in real time and allows setting up the robot.

4.1.4. Instruction Reading Subroutine through WiFi

This program reads the steering and speed information for obstacle avoidance sent by the upper computer, monitors the status of the WiFi communication, and stops the robot in time when the WiFi communication is interrupted.

4.1.5. Steering and Driving Speed Instruction Execution Subroutine

According to the received speed and steering angles of the front and rear virtual wheels, the steering angles and speeds of the ten wheels are calculated with equations (1)–(23).

4.2. Upper Computer Program

The upper computer program includes WiFi module, lidar data reading and decomposition module, travel path prediction-drawing module, and obstacle avoidance algorithm module.

4.2.1. WiFi Module

It includes the WiFi initialization module, the WiFi reading module, and the WiFi instruction writing module, which complete the wireless communication with the robot's on-board control system based on the UDP protocol.
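A minimal sketch of such a UDP exchange from the upper computer side (the address, port, and message format are placeholders, not values given in the paper):

```python
import socket

ROBOT_ADDR = ("192.168.4.1", 8888)    # assumed ESP32S address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.2)                  # keep the control loop responsive

# Send a steering/speed instruction (assumed format: virtual wheel angles + speed).
sock.sendto(b"AF:21,AR:-38,V:108", ROBOT_ADDR)

# Receive one batch of forwarded lidar data.
try:
    lidar_frame, _ = sock.recvfrom(4096)
except socket.timeout:
    lidar_frame = b""                 # on loss, the on-board program stops the robot
```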

4.2.2. Lidar Data Reading and Decomposition Module

According to the communication protocol of the lidar, this module verifies the lidar point data of the obstacles and calculates the angle and distance of each lidar point.

4.2.3. Travel Path Prediction-Drawing Module

This module mainly calculates the robot's next running trajectory with equation (28), according to the steering angles AF and AR of the front and rear virtual wheels and the speed V.

4.2.4. Obstacle Avoidance Algorithm Module

This module mainly uses equations (29)–(34) to iteratively calculate the moving trajectories corresponding to different steering angles AF and AR and then evaluates the relationship between the lidar points and the trajectory envelope area. The AF and AR for which no lidar points fall inside the envelope area are selected as the feasible obstacle avoidance steering scheme.

5. Experimental Verification

To verify the obstacle avoidance control strategy proposed above, experiments with a Z-shaped obstacle distribution were carried out, as shown in Figure 14. All spaces among the obstacles are smaller than the length of the robot body, which meets the layout requirements of a narrow space. During the whole autonomous driving process, the initial speed threshold of the robot was set to 108 (threshold values of 0, 90, and 180 correspond to full speed backward, stop, and full speed forward, respectively; the relation between the threshold and the speed is linear), and the actual driving speed was about 256 mm/s.

The Z-shaped obstacle avoidance experiment was carried out 4 times in total, and in every run the robot passed autonomously without hitting any obstacle. One representative experiment was selected for analysis. The area is 2160 mm × 1580 mm, and the width of the driving channel is 560–680 mm. The distribution of the obstacle passage and the experimental results are shown in Figure 14.

The robot enters from the bottom-left corner of the figure. At the first corner D, the adverse-phase steering control strategy was adopted, with the front wheel control variable AF turning 21° to the right and the rear wheel control variable AR turning 38° to the left, to minimize the space occupied by steering, as shown in Figure 14(d). At the second corner C, the adverse-phase steering control strategy was adopted, with AF turning 25° to the left and AR turning 30° to the right, realizing a flexible small-radius steering mode, as shown in Figure 14(e). During the whole driving process, the motion trajectories of body points A and B are shown in Figures 14(a) and 14(b), and the control outputs of the steering variables AF and AR of the front and rear virtual wheels and the speed V are shown in Figure 14(c). The robot navigated out of the passage without colliding with any obstacle.

The experiments prove that the proposed control strategy can successfully achieve multimode autonomous obstacle avoidance in the narrow space of Z-shaped obstacles.

6. Conclusions

To improve the steering flexibility of multiaxle wheeled robots, a multimode steering control strategy based on front and rear virtual wheels is proposed in this paper, and the corresponding steering trajectories are analyzed. On this basis, by analyzing the relationship between the envelope area of the predicted trajectory and the lidar points, an obstacle avoidance control strategy based on trajectory prediction is proposed. A five-axle, all-wheel-steering, all-wheel-driving wheeled robot was then designed, the corresponding hardware and software control systems were developed, and an obstacle avoidance experiment with a Z-shaped obstacle distribution was carried out. The effectiveness of the proposed control strategy is verified.

In future work, we will further optimize the obstacle avoidance control strategy and increase the speed of obstacle avoidance.

Data Availability

The data supporting the results of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (51005128), 2019 Teaching Research and Teaching Reform Project of Qingdao University of Technology (no. F2019-056), 2020 Teaching Research and Teaching Reform Project of Qingdao University of Technology (F2020-12), and 2020 Teaching Research and Teaching Reform Project of School of Mechanical and Automotive Engineering (2020-1).