Abstract

The existing particle swarm optimization (PSO) algorithm suffers from limited applicability and slow convergence when solving the mobile robot path planning problem. This paper proposes an integrated improvement scheme for PSO that combines uniform population initialization, an exponentially decaying inertia weight, cubic spline interpolation, and learning factors with enhanced control. On the standard test functions, the improved PSO (IPSO) achieves better optimal results in fewer iteration steps than four path planning algorithms developed in the existing literature. In the path planning experiments, IPSO reaches the optimal path length in fewer than 20 iteration steps and reduces the path length and simulation time by 2.8% and 1.1 seconds, respectively.

1. Introduction

Path planning differs from motion planning, in which the system dynamics must be considered. Its purpose is to find the optimal motion path in the least amount of time while fully modeling the environment [1]. For the path planning problem of mobile agents, researchers have proposed many algorithms, which can be classified into two categories: traditional path planning methods and methods based on bionic intelligent algorithms. The former category includes the A* algorithm [2], Dijkstra's algorithm [3], RRT [4], and the artificial potential field method [5]. Bionic intelligent algorithms include the differential evolution algorithm [6], genetic algorithm [7], ant colony algorithm [8], artificial fish swarm algorithm [9], and PSO [10]. The PSO algorithm is widely used in the practical application and theoretical research of mobile agent path planning because of its strong search capability, fast convergence, and high efficiency [11]. PSO is a population-based stochastic technique inspired by the foraging behavior of bird flocks. It has the advantages of fast search speed, memory, few parameters, and a simple structure, and it is easy to implement and validate. Its shortcomings include an imbalance between global and local search, low convergence precision, a tendency to fall into local optima, and poor robustness.

To obtain better optimization results, a PSO technique is proposed in [12] that converges to the global minimum, with a custom algorithm used to generate the coordinates of the search space. The coordinate values generated by the custom algorithm are passed to the PSO algorithm, which uses them to determine the shortest path between two given end positions. The method is therefore not limited to finding the optimal value; it can also improve the speed of the algorithm. However, directly passing the two endpoint coordinates makes it easy to fall into a local optimum, so the obtained value is often only a global suboptimum; to address this, the literature in [13] proposed a random disturbance method.

This adaptive PSO method introduces a disturbance-based update mechanism for the global best position, which prevents the algorithm from stalling. In addition, a new adaptive strategy is proposed to fine-tune the three control parameters of the algorithm. However, the need to dynamically adjust and explore these three parameters increases the computational complexity of the optimization algorithm and lowers its execution efficiency. The work in [14] proposed a modified particle swarm optimization (MPSO) with constraints to jointly obtain a smooth path, but the method does not improve particle diversity, so the particles stagnate and fall into a local optimum. The study in [15] proposed a fusion of chaotic PSO and the ant colony algorithm (ACO). The combined algorithm, called the chaotic particle swarm algorithm, effectively adjusts the particle swarm parameters and reduces the number of iterations of the ant colony algorithm, thereby effectively reducing the search time. However, because of the modified parameters, the velocity and position updates are proportional, which can limit the global search capability of the particles, and the solution can still fall into a local optimum.

The hybrid genetic particle swarm optimization algorithm (GA-PSO) proposed in [16] establishes a mathematical model of the path planning problem and proposes a time-first, particle-first iteration mechanism, which makes the evolution process more directional and accelerates the solution of path planning problems. However, the hybrid algorithm has too many parameters that need to be controlled, which increases the computational complexity and reduces the execution efficiency of the algorithm. It is also difficult to improve the diversity of the particles, so the algorithm easily falls into a local optimum.

Therefore, to combine the advantages of the above improved algorithms and remedy their defects, this paper proposes an integrated improvement scheme for PSO that is used to solve the global path planning of mobile robots in an indoor environment. In the proposed algorithm, the navigation point model is selected as the working area model of the mobile robot, and uniform population initialization, an exponentially decaying inertia weight, cubic spline interpolation, and learning factors with enhanced control are introduced into PSO to improve its performance. Finally, the advancement and effectiveness of the proposed algorithm are verified on standard test functions and in obstacle environments. The results show that, on the standard test functions, IPSO achieves better optimal results in fewer iteration steps. Compared with the other four path planning algorithms, IPSO reaches the optimal path length in fewer than 20 iteration steps and reduces the path length and simulation time by 2.8% and 1.1 seconds, respectively.

2. Improved PSO (IPSO)

2.1. Classical PSO

The fundamental idea of the PSO method is that the individuals in the group share information so that the movement of the whole group evolves from disorder to order in the solution space and the optimal solution of the problem is obtained. Each candidate solution of the optimization problem is a "particle" whose velocity and position are updated through (1) and (2):

v_{id}^{k+1} = \omega v_{id}^{k} + c_1 r_1 (p_{id}^{k} - x_{id}^{k}) + c_2 r_2 (g_{d}^{k} - x_{id}^{k}), (1)

x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}. (2)

The first term of (1) indicates that the next move of the particle is affected by the magnitude and direction of its last flight velocity; the second term means that the next move of the particle originates from its own experience; the third term indicates that the next move of the particle originates from learning from the best companion in the population. That is, the next step of the particle is determined by its own experience and the best experience of its companions. Here, v_{id}^{k} is the dth component of the velocity vector of particle i at the kth iteration; x_{id}^{k} is the dth component of the position vector of particle i at the kth iteration; p_{id}^{k} and g_{d}^{k} are the personal best position of particle i and the global best position of the swarm; c_1 and c_2 are the learning factors, which adjust the maximum learning step size; r_1 and r_2 are two random numbers in the range [0, 1] used to increase the randomness of the search; and \omega is the inertia weight, which adjusts the search range over the solution space. Although the classical PSO is simple to implement and has few adjustable parameters, when it is used for path planning it is prone to poor search capability, falls into local optima, and suffers from reduced particle diversity, low convergence precision, and low path planning accuracy. Therefore, this paper systematically improves the classical PSO algorithm by combining several improvement methods.
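For concreteness, the following is a minimal NumPy sketch of the update rules (1) and (2) applied to a whole swarm. The function name and the velocity clamp v_max are conventions of this sketch rather than part of the paper's implementation.

```python
import numpy as np

def pso_update(x, v, pbest, gbest, w, c1, c2, v_max):
    """One classical PSO step, i.e., Eqs. (1) and (2) for the whole swarm.

    x, v      : (N, D) arrays of particle positions and velocities
    pbest     : (N, D) array of personal best positions
    gbest     : (D,) array, best position found so far by the swarm
    w, c1, c2 : inertia weight and learning factors
    v_max     : velocity clamp (an implementation convention, not part of Eq. (1))
    """
    n, d = x.shape
    r1 = np.random.rand(n, d)  # random factors r1, r2 in [0, 1]
    r2 = np.random.rand(n, d)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    v = np.clip(v, -v_max, v_max)
    x = x + v                                                  # Eq. (2)
    return x, v
```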

2.2. Improvement of PSO Algorithm Integration
2.2.1. Uniform Distribution

We aim to ensure that the PSO algorithm does not lose the randomness of the particles when initializing the population and, at the same time, to avoid an excessive concentration of the initial particle positions, which is not conducive to the global search and later processing, especially because search stagnation may occur in the late stage. In this paper, a continuous uniform random distribution is used to initialize the particles so that they are relatively evenly spread over the search space, which facilitates the later search and prevents the particles from falling into a local optimum. The formulas for the uniform distribution of positions and velocities are as follows:

x_i^j = x_{\min}^j + r (x_{\max}^j - x_{\min}^j), (3)

v_i^j = v_{\min}^j + r (v_{\max}^j - v_{\min}^j), (4)

where r is a random number uniformly distributed on [0, 1]; x_{\max}^j and v_{\max}^j are the upper limits of the variables; x_{\min}^j and v_{\min}^j are the lower limits; and j = 1, 2, ..., n, where n is the number of decision variables.
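As an illustration, a uniform initialization of positions and velocities in the spirit of (3) and (4) could look like the NumPy sketch below; the function name and the bound handling are assumptions of this sketch.

```python
import numpy as np

def init_swarm_uniform(n_pop, x_min, x_max, v_min, v_max):
    """Uniformly distributed initial positions and velocities (cf. Eqs. (3)-(4)).

    x_min, x_max, v_min, v_max : per-dimension lower and upper bounds (length D).
    """
    x_min, x_max = np.asarray(x_min, float), np.asarray(x_max, float)
    v_min, v_max = np.asarray(v_min, float), np.asarray(v_max, float)
    d = x_min.size
    r_x = np.random.uniform(size=(n_pop, d))  # r ~ U(0, 1) for positions
    r_v = np.random.uniform(size=(n_pop, d))  # r ~ U(0, 1) for velocities
    x = x_min + r_x * (x_max - x_min)
    v = v_min + r_v * (v_max - v_min)
    return x, v
```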

2.2.2. Exponential Decay Inertia Weight

The change of the inertia weight \omega affects the position of the particle. The larger the value of \omega, the stronger the global search capability and the weaker the local exploitation capability. Therefore, better results can be obtained when \omega is dynamic (adjustable) rather than fixed. The value of \omega can vary linearly during the PSO search process or dynamically according to a measure function of PSO performance. By analyzing and summarizing the linearly decreasing inertia weight proposed in [17], the particle swarm optimization based on an improved inertia weight proposed in [18], and the fuzzy inertia weight strategy proposed in [19], this paper improves on these methods. The fixed, predictable, iteration-step-based change adopted in the previous approaches is a global transformation that is not conducive to the particles; for example, a fixed transformation can narrow the search range of a particle and reduce particle diversity. Following [20], this paper introduces an exponentially decaying inertia weight, which better matches the characteristics of the exponential function. Because the weight changes differently in the early and late stages of the search, the velocity can be updated appropriately in different periods, which suppresses premature convergence and improves the robustness of the algorithm. The global search and local exploitation abilities are thus maintained, which alleviates the problems mentioned above to some extent. The expression is as follows:

\omega = \omega_{\min} + (\omega_{\max} - \omega_{\min}) e^{-it/\mathrm{MaxIt}}, (5)

where \omega_{\min} is the minimum inertia weight, \omega_{\max} is the maximum inertia weight, MaxIt is the maximum number of iterations, and it is the current iteration number.
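A possible realization of the exponentially decaying inertia weight, using the symbols defined above, is sketched below; the decay constant is an assumption of this sketch, and [20] fixes its own rate.

```python
import numpy as np

def inertia_weight(it, max_it, w_min=0.4, w_max=0.9, decay=1.0):
    """Inertia weight decaying exponentially from about w_max toward w_min (cf. Eq. (5)).

    it, max_it : current and maximum iteration numbers
    decay      : decay constant (an assumption of this sketch)
    """
    return w_min + (w_max - w_min) * np.exp(-decay * it / max_it)
```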

2.2.3. Improved Learning Factors

The learning factors c_1 and c_2 in the standard PSO update (1) represent the acceleration weights with which each particle moves toward its personal best and the global best positions. Lower values allow the particles to linger far from the target region, while higher values cause the particles to rush toward, or overshoot, the target region. Several methods have effectively improved the learning factors, although treating the learning factors and the inertia weight separately weakens the uniformity of the optimization process to a certain extent and is therefore not conducive to balancing the global and local search.

The work in [21] analyzes a PSO with a compression factor that treats the learning factors as functions of the inertia weight. This kind of method has the advantage of transforming the three variables, the inertia weight and the two learning factors, into one variable, which not only facilitates practical application but also enhances the uniformity of the optimization process. However, the function used is complicated, costly, and time-consuming, and the possibility of particle stagnation in the later search is high, increasing the risk of the particles falling into a local optimum. Therefore, according to the need for strong self-learning ability in the early search period and strong "social learning" ability later on, this paper adopts a dynamic method to improve the learning factors; the expressions are shown in (7) and (8) and are constrained by the parameter selection condition (9).

The cosine function in (7) is a decreasing function on the interval (0, π/2); it takes a large value at the beginning of the search, and its value decreases in the later part of the search. The sine function in (8) is an increasing function on the interval (0, π/2), with a small value at the start of the search that increases as the search proceeds. This not only satisfies the condition that the PSO algorithm should have good learning ability throughout the optimization process but also merges the inertia weight and the learning factors into one variable, which is convenient for practical application and strengthens the uniformity of the optimization process. Equation (9) is the guiding condition for selecting and adjusting the PSO parameters given in [22].
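Because the exact expressions (7)-(9) are not reproduced above, the sketch below shows one plausible way to drive cosine/sine learning factors by the inertia weight so that c1 is large early and c2 is large late, as described; the mapping of w onto the interval (0, π/2) and the constant c_max are assumptions of this sketch.

```python
import numpy as np

def learning_factors(w, w_min=0.4, w_max=0.9, c_max=2.0):
    """Cosine/sine learning factors driven by the inertia weight (cf. Eqs. (7)-(8)).

    As w decays from w_max to w_min, theta sweeps from 0 to pi/2, so c1 (cognitive)
    decreases while c2 (social) increases, matching the behavior described above.
    """
    theta = (np.pi / 2) * (w_max - w) / (w_max - w_min)  # 0 early in the search, pi/2 late
    c1 = c_max * np.cos(theta)  # large early: strong self-learning
    c2 = c_max * np.sin(theta)  # large late: strong social learning
    return c1, c2
```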

2.2.4. Improvement Based on Robot Dynamics Requirements

(1) Cubic Spline Interpolation. The experimental results (see Section 4.2 for details) show that the path planned by the improved PSO still has many turning points and is rough, which affects the dynamic characteristics of the robot while it moves. Therefore, the improved PSO algorithm needs to be refined further so that it produces a smooth path and better satisfies the dynamic requirements of the robot. The cubic spline curve is fitted over several interpolation intervals by cubic polynomials to provide a smooth curve, defined as follows:

On the interval [a, b], take n + 1 nodes a = x_0 < x_1 < ... < x_n = b with node coordinates (x_i, y_i). A function S(x) is called a cubic spline if it satisfies the following conditions:
(i) Within each cell [x_i, x_{i+1}], where i = 0, 1, ..., n − 1, S(x) is a cubic polynomial, S_i(x) = a_i + b_i (x − x_i) + c_i (x − x_i)^2 + d_i (x − x_i)^3.
(ii) The function and its first and second derivatives are continuous at the interpolation points.
(iii) S(x) commonly uses endpoint conditions that satisfy one of the following three requirements: (a) free boundary: the second derivative at the endpoints is zero; (b) fixed (clamped) boundary: the values of the first derivative at the start and end points are specified; (c) nonnode (not-a-knot) boundary: the third derivative is continuous at the second and the second-to-last nodes.
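For illustration, fitting and resampling such a spline can be done with SciPy's CubicSpline. Parametrizing the path by the node index, so that x and y are each interpolated against that index, is a choice of this sketch; the 'not-a-knot' option corresponds to the nonnode boundary used in the experiments of Section 4.2.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_path(nodes_x, nodes_y, n_points=100):
    """Fit a cubic spline through the path nodes and resample it into a smooth path.

    nodes_x, nodes_y : coordinates of the start point, the path nodes, and the end point
    n_points         : number of interpolation points returned along the path
    """
    nodes_x, nodes_y = np.asarray(nodes_x, float), np.asarray(nodes_y, float)
    t = np.arange(nodes_x.size)                        # parametrize by node index
    sx = CubicSpline(t, nodes_x, bc_type='not-a-knot')
    sy = CubicSpline(t, nodes_y, bc_type='not-a-knot')
    tt = np.linspace(t[0], t[-1], n_points)
    return sx(tt), sy(tt)
```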

(2) Particle Coding. Particle coding assigns coordinate positions to a number of path nodes. A path node is the intersection of two adjacent cubic spline intervals. Path nodes can be selected arbitrarily or according to the environment. Suppose that the coordinates of the n path nodes are (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), and that the start and end coordinates of the path are (x_s, y_s) and (x_e, y_e). The coordinates of the interpolation points are obtained by cubic spline interpolation over the interval from the start point to the end point. The path planned by a particle's code is the connection of these interpolation points. This paper uses the path nodes, the interpolation points, and the connection from the start point to the end point of the path as the running trajectory of the robot.

(3) Evaluation Function. This paper seeks the shortest path that does not intersect any obstacle. The evaluation function of the proposed algorithm is given in (11), where L_i is the path length of the mobile agent from the ith path point to the (i + 1)th path point, an index commonly used in path planning, with the mathematical expression given in (12); (x_i, y_i) and (x_{i+1}, y_{i+1}) are the coordinates of the ith and (i + 1)th path points; \lambda is the weight coefficient, set to 100 here, which is used to exclude illegal paths, i.e., paths that pass through an obstacle; and P is the penalty function of the obstacle avoidance constraint in the selected indoor environment model, which is used to maintain a safe distance. P is computed as in (13), where r_m is the radius of the mth obstacle, M is the number of obstacles, and (c_m, d_m) are the center coordinates of the mth obstacle. The smaller the value of P, the higher the safety coefficient of the final path: P is greater than 0 if the path passes through the mth obstacle and equals 0 if the obstacle is avoided:

F = \sum_{i=1}^{N-1} L_i + \lambda P, (11)

L_i = \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}, (12)

P = \sum_{m=1}^{M} \max\left(0,\; r_m - \min_i \sqrt{(x_i - c_m)^2 + (y_i - d_m)^2}\right), (13)

where N is the number of path points.
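One direct way to compute such an evaluation function from an interpolated path is sketched below; the circular obstacle representation and the exact penalty form are assumptions of this sketch based on the description above.

```python
import numpy as np

def evaluate(path_x, path_y, obstacles, lam=100.0):
    """Fitness of a candidate path (cf. Eqs. (11)-(13)): length plus an obstacle penalty.

    path_x, path_y : interpolated path coordinates (e.g., from the spline sketch above)
    obstacles      : iterable of (cx, cy, r) circles describing the expanded obstacles
    lam            : weight coefficient used to exclude illegal paths (100 in the paper)
    """
    path_x, path_y = np.asarray(path_x, float), np.asarray(path_y, float)
    dx, dy = np.diff(path_x), np.diff(path_y)
    length = np.sum(np.sqrt(dx**2 + dy**2))        # Eq. (12) summed over all segments

    penalty = 0.0                                  # Eq. (13): zero when every obstacle is cleared
    for cx, cy, r in obstacles:
        dist = np.sqrt((path_x - cx)**2 + (path_y - cy)**2)
        penalty += max(0.0, r - dist.min())        # positive only if the path enters the circle

    return length + lam * penalty                  # Eq. (11)
```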

3. Improved Algorithm Flow

This section provides a detailed step-by-step description of the improved PSO algorithm of Section 2; a schematic sketch of the whole loop is given after the steps.
Step 1: determine the number of path nodes and the number of interpolation points according to the actual environment.
Step 2: set the particle parameters and initialize the particle positions and velocities uniformly using (3) and (4), respectively.
Step 3: compute the coordinates of the m interpolation points in the x and y directions for each particle.
Step 4: calculate the fitness value of each particle according to (11).
Step 5: update the velocity and position of each particle according to (1), (2), and (5) to (9), and update the local optimal value Pbest and the global optimal value Gbest.
Step 6: determine whether the updated particle intersects an obstacle according to (13), obtain the path formed by the updated path node coordinates, and increase the iteration counter by one.
Step 7: if the termination condition is met (the error falls below a negligible threshold), the algorithm ends and the optimal path is output; otherwise, return to Step 3 and repeat the procedure.
The flowchart of the algorithm is shown in Figure 1.
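The sketch below ties the preceding sketches together into the loop of Steps 1 to 7. It is a schematic outline under the stated assumptions, not the paper's MATLAB implementation; the helpers pso_update, init_swarm_uniform, inertia_weight, learning_factors, smooth_path, and evaluate are the hypothetical functions defined earlier.

```python
import numpy as np

def ipso_plan(start, goal, obstacles, n_nodes=3, n_pop=150, max_it=100,
              w_min=0.4, w_max=0.9, lo_bound=0.0, hi_bound=10.0):
    """Skeleton of the IPSO flow of Section 3 (Steps 1-7).

    Each particle encodes the free path nodes as [x1..xn, y1..yn]; start and goal are fixed.
    """
    dim = 2 * n_nodes
    lo, hi = np.full(dim, lo_bound), np.full(dim, hi_bound)
    v_max = 0.2 * (hi - lo)
    x, v = init_swarm_uniform(n_pop, lo, hi, -v_max, v_max)        # Steps 1-2

    def fitness(p):
        nx = np.concatenate(([start[0]], p[:n_nodes], [goal[0]]))
        ny = np.concatenate(([start[1]], p[n_nodes:], [goal[1]]))
        px, py = smooth_path(nx, ny)                                # Step 3
        return evaluate(px, py, obstacles)                          # Step 4

    cost = np.array([fitness(p) for p in x])
    pbest, pbest_cost = x.copy(), cost.copy()
    gbest = pbest[np.argmin(pbest_cost)].copy()

    for it in range(max_it):                                        # Steps 5-7
        w = inertia_weight(it, max_it, w_min, w_max)
        c1, c2 = learning_factors(w, w_min, w_max)
        x, v = pso_update(x, v, pbest, gbest, w, c1, c2, v_max)
        x = np.clip(x, lo, hi)
        cost = np.array([fitness(p) for p in x])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = x[better], cost[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()

    return gbest, pbest_cost.min()

# Example call with a hypothetical obstacle layout:
# best_nodes, best_cost = ipso_plan((0, 0), (9, 9), [(3, 3, 1.0), (6, 6, 1.0)])
```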

4. Simulation Experiment and Result Analysis

To verify the effectiveness and feasibility of the improved algorithm, the optimal values of typical functions and the path planning results are compared and analyzed for the improved algorithm of this paper (denoted IPSO), the classical PSO algorithm (PSO), the stochastic inertia weight PSO algorithm of [23] (RandWPSO), the trigonometric learning factor PSO algorithm of [24] (TFPSO), and the PSO with contraction factor algorithm of [25] (NCFPSO). The simulation experiments are carried out on Windows 10 with a Core i7 CPU (2.2 GHz), 8 GB of memory, and MATLAB R2018a.

4.1. Standard Test Function Optimization

At present, researchers working on intelligent bionic algorithms commonly evaluate and compare algorithm performance by finding the optimal values of five typical functions. The Ackley function is generally used to examine the global convergence rate of an algorithm, the Rastrigin function is used to test its ability to find the global optimum, and the Griewank function tests its ability to jump out of local optima. The Sphere and Rosenbrock functions are unimodal functions used to test the search for the global optimum.

In this paper, the above five functions are used to compare and evaluate the effectiveness of the proposed algorithm. The basic mathematical properties of these functions are shown in Table 1.
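For reference, the five benchmarks can be written compactly as below in their common textbook forms; the exact search ranges and parameter settings of Table 1 may differ slightly.

```python
import numpy as np

def sphere(x):
    """Unimodal; global minimum 0 at x = 0."""
    return np.sum(x**2)

def rosenbrock(x):
    """Narrow parabolic valley; global minimum 0 at x = 1."""
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def rastrigin(x):
    """Highly multimodal; global minimum 0 at x = 0."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griewank(x):
    """Multimodal with many regular local optima; global minimum 0 at x = 0."""
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def ackley(x):
    """Multimodal; global minimum 0 at x = 0."""
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
```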

4.1.1. Standard Test Function and Parameter Setting

This section compares the performance of the proposed algorithm with several algorithms developed in the current literature. The maximum number of iterations of each algorithm is set to MaxIt = 1000, the population size is set to Npop = 50, the dimension is set to D = 30, and the dynamic inertia weight is bounded by its lower and upper limits \omega_{\min} and \omega_{\max}. Each of the five algorithms is run 20 times, and the optimal values, mean values, standard deviations, and average simulation run times are computed from the experimental results of each algorithm. The performance of the proposed IPSO algorithm is evaluated using these four indicators together with the iteration curves.

4.1.2. Comparative Analysis of Simulation Results on the Standard Test Functions

The test curves for the five standard functions are shown in Figure 2. The test curve for the Ackley function is shown in Figure 2(a). The Ackley function is an n-dimensional function with many local optima, so its global optimum is difficult to find; however, the curve in Figure 2(a) shows that the proposed IPSO algorithm has the fastest convergence and the highest precision among the compared algorithms. Figure 2(b) shows the iteration curve of the Griewank function. The Griewank function is a typical nonlinear multimodal function with a large search space and many local optima. As can be seen from Figure 2(b), the proposed IPSO algorithm obtains the optimal value better than the other algorithms, and its convergence speed is fast; its optimal value is reached in fewer than 120 iteration steps. This shows that the proposed algorithm has a better ability to escape local optima. Figure 2(c) shows the iteration curve of the Rastrigin function. The Rastrigin function, like the Griewank function, is a multipeak function on which algorithms easily become trapped in a local optimum. From Figure 2(c), the proposed IPSO algorithm approaches the global optimum in fewer iteration steps than the other four algorithms in the literature; it converges at 110 iteration steps, which verifies its ability to jump out of local optima.

Figure 2(d) shows the iteration curve of the Rosenbrock function. The global optimum of the Rosenbrock function lies in a smooth, narrow parabolic valley; the function provides limited information to the optimization algorithm, and it is challenging to distinguish the search direction, which makes the global optimum difficult to find. Nevertheless, the iteration curve shows that the proposed IPSO algorithm finds the global optimum in fewer than 60 iterations, which is the fastest among the compared algorithms. This indicates that the proposed algorithm has better global search capability.

Figure 2(e) shows the iteration curve of the Sphere function. From Figure 2(e), the proposed algorithm shows some advantage in convergence speed and convergence precision, although the advantage is not obvious; still, the optimal value obtained by the proposed algorithm is the smallest. Because the Sphere function is a unimodal function with a unique global minimum, finding its optimum is not difficult, so the algorithms are not strongly differentiated on this function.

Table 2 presents the test results obtained on the standard functions Ackley, Griewank, Rastrigin, Rosenbrock, and Sphere for the comparison algorithms and the proposed algorithm. It can be seen from Table 2 that, for the Ackley function, the best result of the proposed algorithm is the same as that of the other four comparison algorithms, whereas for the other four functions the best results of the proposed algorithm are improved by at least 45%. After optimization of the five functions by the proposed algorithm, the average value is improved by at least 0.1% and the standard deviation by at least 0.2%. The experimental data show that the IPSO algorithm is superior to the other four comparison algorithms in terms of optimal value, average value, and standard deviation. However, its average running time is longer than that of the classical PSO. This is because the proposed algorithm updates its parameters through functions, whereas the classical PSO uses constant parameters, and evaluating these functions adds some computation. The main limitation of the IPSO algorithm is therefore its running time, and reducing it will be considered in our future studies. Although the IPSO algorithm converges more slowly at the beginning when optimizing the Rastrigin and Sphere functions, as the optimization proceeds it performs better than the other four algorithms in the middle and later stages.

4.2. Path Planning
4.2.1. Experimental Environment and Parameter Setting

The IPSO algorithm uses the navigation point model of [26] to construct the experimental environment model, and the obstacles are expanded accordingly. The experimental environment is divided into a simple environment and a complex environment. In the simple environment, the number of obstacles is set to 5, the number of path nodes is 3, and the number of interpolation points is set to 100. In the complex environment, the number of obstacles is set to 11, the number of path nodes is 5, and the number of interpolation points is set to 100. The boundary condition is the nonnode (not-a-knot) boundary.

Simulation parameter selection: the population size and the maximum number of iterations of the five algorithms are kept consistent: Npop = 150, MaxIt = 100, \omega_{\min} = 0.4, and \omega_{\max} = 0.9.

4.2.2. Path Planning and Algorithm Performance Analysis

(1) Simple Environment. In the simple environment, the starting point (■) is at the origin of the map coordinate system, (0, 0), and the endpoint (★) is at (9, 9), as shown in Figure 3(a).

It can be seen from Figure 3(a) that the path of the IPSO algorithm is smoother, which improves the dynamics of the robot motion. The classical PSO does not find the optimal solution at the beginning of the optimization and stagnates along the way, which shows that it easily falls into a local optimum; its ability to find the global optimum is weak because of poor particle diversity and small disturbances. The TFPSO algorithm stagnates briefly at the start of the search and only escapes the local optimum through the disturbance of the algorithm, which indicates a weak ability to jump out of local optima.

Figure 3(b) shows that our proposed IPSO algorithm has the fastest convergence rate and converges at 13 iteration steps; the RandWPSO algorithm converges at around 20 iterative steps; the TFPSO converges at the 48th iteration; the NCFPSO algorithm converges at the 14th iterative step; the classical PSO converges at the 34th iteration.

Table 3 compares the five algorithms in terms of the longest, shortest, and average path lengths and the average simulation run time. Compared with the other four algorithms, the longest path length of the proposed algorithm is reduced by at least 3.3%, the shortest path length by at least 2.9%, the average path length by at least 5.2%, and the average simulation time by 1.2 s.

(2) Complex Environment. To verify the universality of the experiment, two types of experiments were carried out. The first experiment is carried out from the starting point to the endpoint, while the second experiment is conducted from the endpoint to the starting point.

The first type of experiment: the starting point (■) is at (0, 0) in the map coordinate system, and the endpoint (★) is at (9, 9), as shown in Figure 4(a).

It can be seen from Figure 4(a) that the path planned by the proposed algorithm is not only the shortest but also the smoothest. The planned paths in the complex environment are relatively concentrated, because the many obstacles leave fewer paths to choose from. The classical PSO and TFPSO appear stagnant at the beginning of the optimization, particularly the classical PSO; a good solution is obtained only after the algorithms have optimized for a while, and the local optimum is escaped once the disturbance takes effect. This shows that these two algorithms (classical PSO and TFPSO) have small disturbances, poor particle diversity, and a weak ability to jump out of local optima. Figure 4(b) presents the iteration curves obtained in the complex environment. From Figure 4(b), the IPSO algorithm has the fastest convergence rate and the highest convergence precision, converging in 22 iteration steps; the NCFPSO algorithm converges in 27 iteration steps; the RandWPSO algorithm starts to converge after 28 iterations; TFPSO starts to converge after around 39 iterations; and the classical PSO starts to converge at around 93 iteration steps. Table 4 compares the path lengths of the five algorithms in the first type of experiment in the complex environment.

Table 4 shows that the shortest path of the IPSO algorithm is reduced by at least 3% compared with the other four algorithms, the average path is reduced by at least 2.1%, and the average simulation running time is reduced by 1.1 s. Furthermore, path planning has higher accuracy.

The second type of experiment: the starting point (■) is at (9, 9) in the map coordinate system, and the endpoint (★) is at (0, 0), as shown in Figure 5(a).

Considering Table 5, the path planning performance of the proposed algorithm is as follows: the average path length is reduced by at least 5.2%, the shortest path length by at least 1.5%, the longest path length by 7.2%, and the average simulation running time by 1.2 s.

In summary, it is verified that the proposed IPSO algorithm yields the shortest path, the highest precision, and the least processing time compared with the four algorithms in the existing literature.

4.2.3. Summary of Path Planning

In the simple environment, the paths planned by the five algorithms are relatively scattered, because there are few obstacles and ample space, leaving many paths for the algorithms to choose from. Comparing the path planning diagrams of the first and second types of experiments in the complex environment, the five paths planned in the first type of experiment are more concentrated when the robot starts running and more dispersed in the later stages. This is because, in the first type of experiment, obstacles are dense near the starting point, so the algorithms must avoid them according to their optimization ability, whereas in the later stages of the run the obstacles near the endpoint are sparse, which lets the paths spread out.

The second type of experiment shows the opposite behavior. The three path planning experiments were carried out to verify the effectiveness of the proposed IPSO algorithm against the algorithms in the existing literature. The algorithms are compared in terms of the longest path, the shortest path, the average path, and the average simulation running time, and in all of these comparisons the proposed IPSO algorithm performs better than the other four algorithms. This is because the proposed algorithm introduces a uniform distribution initialization strategy; furthermore, the inertia weight and the learning factors interact in the optimization, which reduces the number of parameters of the proposed algorithm and increases the uniformity of the improved algorithm, thereby better balancing its global and local search. The introduction of the exponentially decaying inertia weight improves the search capability of the proposed algorithm and increases its disturbance, which in turn improves the diversity of the particles.

5. Conclusion

In this paper, particle swarm optimization (PSO) and cubic spline interpolation are combined to solve the minimization of five test functions and the robot path planning problem. In view of the problems of the classical particle swarm optimization algorithm, such as poor search capability, low convergence accuracy, easy entrapment in local optima, poor robustness, and poor path smoothness, the classical algorithm is improved in the following aspects. (1) A uniform initialization strategy is adopted for the population to improve the later search capability of the IPSO algorithm and to prevent it from falling into local optima, because randomly initialized particles are not evenly distributed, which is not conducive to the search in the later stages. (2) The exponentially decaying inertia weight gives the particles a large step size in the early stage of the search, which benefits the global search, while the small step size in the later stages favors local exploitation and yields high optimization accuracy. The experimental results show that this increases the disturbance and the diversity of the particles to a certain extent. (3) Sine and cosine functions with the inertia weight as the independent variable are used to control the learning factors, so that the three variables become one; this reduces the number of parameters of the improved algorithm and thus its complexity, strengthens the interaction between the inertia weight and the learning factors, improves the unity of the optimization process, and improves the performance of the algorithm to a certain extent. (4) An evaluation function is constructed, and a smooth path is planned with cubic spline interpolation, which improves the accuracy and the dynamic characteristics of the path.

However, compared with the classical PSO algorithm, the time advantage of the proposed algorithm is weak, which will be a key issue for future research; nevertheless, this does not affect the competitiveness of the improved algorithm. In follow-up studies, virtual obstacles will be used in simulation experiments for multirobot path planning, and the algorithm will be applied in a real environment, with a TurtleBot as the hardware platform and ROS (Robot Operating System) as the software platform, to improve its practicability.

Conflicts of Interest

The authors declare that they do not have any conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 51607133); Shaanxi Natural Science Basic Research Project (No. 2019JM567); China Textile Industry Federation Science and Technology Guidance Project (No. 2018094); and College Student Innovation and Entrepreneurship Training Program (No. 201910709019).