Abstract

A path planning method for unmanned underwater vehicle (UUV) homing and docking in an environment with moving obstacles is proposed in this paper. Firstly, a cost function for path planning is constructed. Then, a novel particle swarm optimization (NPSO) is proposed and applied to find the waypoint with the minimum value of the cost function. Next, a strategy is proposed for the UUV to enter the mother vessel at a fixed angle. Finally, test functions are introduced to analyze the performance of NPSO and to compare it with basic particle swarm optimization (BPSO), inertia weight particle swarm optimization (LWPSO, EPSO), and time-varying acceleration coefficient PSO (TVAC). It turns out that, for unimodal functions, NPSO achieves better searching accuracy and stability than the other algorithms, while, for multimodal functions, the performance of NPSO is similar to that of TVAC. The simulation of UUV path planning is then presented; it shows that, with the strategy proposed in this paper, the UUV can dodge obstacles and threats and find an efficient path.

1. Introduction

Unmanned underwater vehicles (UUV) were first designed for military purposes. Compared with ordinary vehicles, they are more suitable for stealth and scouting missions. Recent advances in technology have driven the development of unmanned vehicles to a new level. UUV are widely applied in civilian applications such as surveying, landscape mapping, and rescue [1]. A major challenge in the development of UUV is the realization of a real-time path planning and obstacle avoidance strategy that can effectively guide the vehicle in unstructured environments [2].

In recent years, many researchers have developed various approaches to solve path planning problems, such as genetic algorithms (GAs), linear programming, potential fields, probabilistic sampling methods like rapidly exploring random trees (RRTs), and artificial intelligence (AI) methods like the A* algorithm, which assures path optimality. Cheng et al. [1] proposed a GA-inspired UUV path planner based on dynamic programming (DP). In the proposed path planner, the random-based crossover operator in GA is replaced with a deterministic crossover operator, so the planner can always provide the best combination of crossover points from the available path segments. Li et al. [3] proposed a novel artificial bee colony (ABC) algorithm for unmanned combat aerial vehicle (UCAV) path planning; simulation results confirm that the algorithm is more competent for the UCAV path planning scheme than other ABC algorithms. Mansury et al. [4] proposed a solution to the path planning problem using the artificial bee colony (ABC) algorithm and cubic Ferguson splines: a path for robot movement is first described by Ferguson splines, and the ABC algorithm is then used to optimize the parameters of the splines to find an optimal path between the starting and goal points considering the obstacles between them. Khelchandra and Jie [5] proposed a path planning method based on random sampling. Their method has shown a high probability of finding collision-free paths in short time; however, because of its randomized nature, it may come up with infeasible solutions. Fernández-Perdomo et al. [6] proposed a novel path planning algorithm for gliders using ocean currents. It is based on the A* family of algorithms and incorporates a probabilistic framework. Instead of discretizing the search space, a set of bearing angles is sampled at each surfacing point and the glider trajectory is integrated. Like other exact algorithms, the computational complexity of A* increases with the size of the search domain. Li et al. [7] redefined potential functions to eliminate oscillations and local minima problems and used improved wall-following methods so that robots can escape nonreachable target problems. Meanwhile, they developed a regression search method to optimize the planned path: the optimized path is calculated by connecting the sequential points produced by the improved artificial potential field (APF) method. The simulation results confirm that the proposed path planning approach can calculate a shorter, more nearly optimal path than the general APF. Jaillet et al. [8] presented a sampling-based algorithm to compute paths in problems that involve high-dimensional cost spaces. The proposed method combines the exploratory strength of the rapidly exploring random tree (RRT) algorithm with the efficiency of stochastic optimization methods and integrates an adaptive mechanism that helps to ensure good performance over a large set of problems. By its nature, path planning is a constrained optimization problem, and many optimization algorithms have been proposed to solve it. Brand et al. [9] proposed the application of ant colony optimization and compared two different pheromone reinitialization schemes.

Operating in an unknown semistructured environment, UUV may encounter obstacles that are not described in the electronic chart. In order to ensure global and real-time path planning, Yun et al. [10] divided the path planning algorithm into two levels: first, global path planning is carried out based on a geometric method, and then local path planning is carried out based on a new artificial potential field function that uses the sector as its parameter. To ensure real-time path planning, the sonar information is combined with the static global map. Dolgov et al. [11] described a practical path-planning algorithm for an autonomous vehicle operating in an unknown semistructured environment, where obstacles are detected online by the robot’s sensors. They first use a variant of A* search to obtain a kinematically feasible trajectory and then improve the quality of the solution via numeric nonlinear optimization, leading to a local optimum. They further extend the algorithm to use prior topological knowledge of the environment to guide path planning, leading to faster search and final trajectories better suited to the structure of the environment.

UUV must have homing and docking functions in order to be completely autonomous. Homing is an operation in which UUV return to the vicinity of the launcher after a mission; path planning and tracking control of the planned path are included in this operation. Docking is another operation in which UUV are fixed to the launcher exactly when they come close to it. Docking is more difficult than homing since the position of the launcher can easily be changed by ocean currents and waves; hence, a much more precise path that takes the future position of the launcher into account is needed [12]. Many docking methods have been studied, but most of them concern towing or underwater structures designed for docking with the docking station [13, 14], and many articles focus on providing accurate measurement of the UUV position and orientation or on control strategies for UUV docking [15–18]. However, planning the homing and docking path is the first step of UUV homing and docking. Sujit et al. [19] presented a navigation function based approach for docking onto a moving submarine. The motion planning system is based on potential fields but avoids the problem of local minima around no-fly zones by using a Koditschek and Rimon navigation function. The authors represented the no-fly zones with circles and ellipses; these zones have tangential potentials, so when the AUV reaches them, it is deflected away depending on the direction of the tangential potentials. Using these directional potentials of the no-fly zones, the navigation function based controller guides the AUV towards the dock, but the resulting path is not an optimal rendezvous path.

In this paper, we propose a strategy to plan the UUV homing and docking path in an environment with moving obstacles, and the application of the particle swarm optimization (PSO) algorithm is investigated [20, 21]. PSO is considered one of the modern heuristic algorithms for optimization, first proposed by Kennedy and Eberhart in 1995 [22]. The motivation for the development of this method was the simulation of simplified animal social behaviors [23]. The PSO algorithm works on the social behavior of particles in the swarm [24]. In PSO, the population dynamics simulates a bird flock’s behavior, where social sharing of information takes place and individuals can profit from the discoveries and previous experience of all other companions during the search for food. That is, the global best solution is found by simply adjusting the trajectory of each individual towards its own best location and towards the best particle of the entire swarm at each time step [22, 23, 25]. Owing to its reduced memory requirement, computational efficiency, and convenient implementation, it has gained a lot of attention in various optimal control applications compared to other evolutionary algorithms [20, 21, 26]. Several studies have been carried out to analyze the performance of PSO with different settings; for example, Shi and Eberhart [27] indicated that the optimal solution can be improved by varying the value of the inertia weight from 0.9 at the beginning of the search to 0.4 at the end of the search for most problems, and they introduced a method named TVIW with a linearly varying inertia weight over the generations. Guimin et al. [28] introduced exponential inertia weight strategies, which are found to be very effective variants of TVIW. Ratnaweera et al. [23] proposed time-varying acceleration coefficients as a parameter automation strategy for PSO, named TVAC, which reduces the cognitive component and increases the social component by changing the acceleration coefficients.

The contributions of this paper are summarized as follows. Firstly, the static obstacle avoidance cost function, the dynamic obstacle avoidance cost function, and the approaching target cost function are built to construct the path planning cost function. Secondly, we propose a new strategy to plan the path for UUV homing and docking. Thirdly, we propose a novel particle swarm optimization algorithm and use it to find the path point that minimizes the cost function.

The rest of this paper is organized as follows. In Section 2, the path planning cost function and a new strategy to plan path for UUV homing and docking are introduced. In Section 3, the basic PSO methodology and previous developments are summarized. In Section 4, a novel particle swarm optimization algorithm is proposed. In Section 5, the experimental settings for the benchmarks and simulation strategies are explained, and the conclusion is drawn based on the comparison analysis. In Section 6, the simulation of UUV path planning is explained. In Section 7, we present some concluding remarks.

2. UUV Path Planning for Homing and Docking

In this section, UUV path planning for mother vessel recovery using PSO in a dynamic obstacle environment is presented and a reasonable path is studied. The path planning cost function has three parts: a cost function for static obstacles, a cost function for dynamic obstacles, and a cost function for approaching the target. To simplify the issue, the UUV sails in the mission zone at a fixed velocity, so it covers a fixed distance in each step. Since each planning step corresponds to a fixed length of time, once the heading angle towards the next waypoint is obtained, the position of the next waypoint follows directly. Each particle of the swarm is therefore taken to be a candidate heading angle towards the next waypoint. Using NPSO, we can obtain the heading angle towards the optimal next waypoint, that is, the one with minimum cost function value.

2.1. UUV Path Planning Cost Function

Firstly, define the initial position of the UUV, the intermediate waypoints, and the position of the target.

2.1.1. Cost Function for Static Obstacles

In the underwater environment, UUV have to avoid static obstacles such as submarine ridges, reef rocks, and hidden poles. Static obstacle avoidance for UUV is illustrated in Figure 1, where the obstacles are represented by circles.

Let the current UUV position and the next candidate waypoint be given. As shown in Figure 1, the straight-line distance is the range from the UUV to the obstacle along the direction towards the candidate waypoint. The cost function for static obstacles is then built from this distance, an adjusting coefficient (constant and given), and the UUV velocity (given): the shorter the time for the UUV to reach the obstacle, the larger the value of the cost function, and vice versa. The candidate waypoint with the minimum cost value is set as the next waypoint.
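The original display equation is not reproduced here; a minimal sketch consistent with the description, writing $k_1$ for the adjusting coefficient, $v$ for the UUV velocity, and $d$ for the range to the obstacle along the candidate heading (these symbol names are assumptions, not taken from the paper), is

$f_{\mathrm{so}} = k_1 \dfrac{v}{d},$

so that the cost grows as the time-to-obstacle $d/v$ shrinks; if no obstacle lies along the heading, the term can be taken as zero.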

2.1.2. Cost Function for Dynamic Obstacles

UUV also have to avoid dynamic obstacles existing in the mission zone. In Figure 2, the cross point is the intersection of the UUV course line and the movement direction of the dynamic obstacle. Assume that the time interval for the UUV to reach the cross point is known, as is the time interval for the dynamic obstacle to reach the same point. If the UUV reaches the cross point first, it is said that the UUV is faster than the obstacle; when the margin between the two arrival times is large, the UUV sails safely. The avoidance cost function is therefore chosen to decrease as this margin grows: the longer the time margin between the UUV and the obstacle, the more safety is obtained, and vice versa, and the cost function for dynamic obstacles is adjusted accordingly.

If, on the contrary, the dynamic obstacle arrives at the cross point sufficiently earlier than the UUV, the obstacle will already have passed, and the UUV can avoid it safely. Combining the two cases yields the cost function for dynamic obstacles.

Considering the sizes of the UUV and of the dynamic obstacle, the cost function in (3) can be rewritten with two adjusting coefficients (constant and given), the ratio of the UUV length to the UUV velocity, and the ratio of the obstacle length to the obstacle velocity. If the cross point vanishes, the UUV trajectory and the dynamic obstacle trajectory have no intersection, and the cost for dynamic obstacles is zero.
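The corresponding display equations are likewise not reproduced; as an illustration only, writing $t_u$ and $t_o$ for the arrival times of the UUV and of the obstacle at the cross point (symbols introduced here, not taken from the paper), a piecewise form in the spirit of the description is

$f_{\mathrm{do}} = \begin{cases} k_2 \dfrac{1}{|t_u - t_o|}, & \text{if a cross point exists},\\ 0, & \text{otherwise}, \end{cases}$

with the size correction obtained by shrinking the margin $|t_u - t_o|$ by the two length-to-velocity ratios mentioned above.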

2.1.3. Cost Function for Approaching Target

To drive the UUV towards the target, a cost function for approaching the target is designed. The schematic diagram of approaching the target is shown in Figure 3.

In Figure 3, the bearing to the target is the angle of the line connecting the current position and the target. Because both positions are known, this bearing is easy to calculate. The optimization parameter is the heading angle of the candidate waypoint. When the heading angle equals the bearing to the target, the UUV moves directly towards the target, and the cost function for approaching the target should be the most favorable. A normal distribution centered on this bearing is therefore introduced to represent the approaching target cost.
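As an illustrative sketch (the sign convention and normalization are assumptions), let $\psi$ be the candidate heading angle and $\psi_t$ the bearing to the target; a Gaussian-shaped term of the kind described above is

$f_{\mathrm{at}}(\psi) = 1 - \exp\!\left(-\dfrac{(\psi - \psi_t)^2}{2\sigma^2}\right),$

where $\sigma$ controls how sharply off-target headings are penalized and the term is smallest when $\psi = \psi_t$.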

2.1.4. UUV Path Planning Cost Function

The UUV path planning cost function combines the three cost functions above as a weighted sum, where the weight coefficients are constant and given. The UUV sails from the initial waypoint with a fixed velocity, and each planning step covers a fixed length of time. Each particle of the swarm is set as a candidate heading angle towards the next waypoint. Using NPSO, we obtain the heading angle towards the optimal next waypoint, that is, the one that minimizes the total cost function.
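To make the waypoint-selection step concrete, the following sketch evaluates a weighted cost over candidate headings and steps a fixed distance along the best one. It is illustrative only: the function and parameter names are assumptions, the cost terms are passed in as callables, and the paper uses NPSO rather than the exhaustive sweep shown here.

import numpy as np

# Illustrative sketch, not the authors' code: pick the heading that minimizes
# the weighted path-planning cost and step a fixed distance along it.
def next_waypoint(pos, step, cost_terms, weights, n_candidates=72):
    # cost_terms: callables f(pos, heading) -> cost value; weights: one per term.
    pos = np.asarray(pos, float)
    headings = np.linspace(-np.pi, np.pi, n_candidates, endpoint=False)
    total = [sum(w * f(pos, h) for w, f in zip(weights, cost_terms))
             for h in headings]
    best = headings[int(np.argmin(total))]
    return pos + step * np.array([np.cos(best), np.sin(best)]), best

In the paper's setting, the sweep over headings would be replaced by an NPSO search over the heading angle, and the obstacle and target data would be captured inside the cost terms.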

2.1.5. The Admittance Angle for Homing and Docking

For homing and docking, the UUV enters the mother vessel (launcher) at a fixed angle (Figure 4). To obtain an effective path at the end of the planned path, let a straight line denote the mother vessel hatch center line, and construct a circle that is tangent to this line at one point. The distance from the tangent point to the hatch is larger than the length of the vessel (generally, the length of two UUV is chosen), and the circle radius is larger than the minimum turning radius.

In the path planning for homing and docking, the objective is the tangent point between this circle and the line through the current UUV path point. When the remaining distance to that tangent point is no larger than the distance covered in one step (the UUV velocity multiplied by the time slot for one step), the path planning is finished and the UUV enters the mother vessel at the end of the arc.
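A small geometric helper illustrates how the tangent points from the current UUV position to the approach circle can be computed; the function name and interface are assumptions, but the geometry (a right triangle between the external point, the circle centre, and the tangency point) is standard.

import numpy as np

# Tangent points from external point p to the circle of radius r centred at c
# (illustrative helper, not the authors' code).
def tangent_points(p, c, r):
    p, c = np.asarray(p, float), np.asarray(c, float)
    d = np.linalg.norm(p - c)
    if d <= r:                          # point on or inside the circle
        return []
    base = np.arctan2(p[1] - c[1], p[0] - c[0])   # bearing of p as seen from c
    alpha = np.arccos(r / d)            # angle between c->p and c->tangency point
    return [c + r * np.array([np.cos(base + s * alpha), np.sin(base + s * alpha)])
            for s in (1.0, -1.0)]

Of the two returned points, the planner would keep the one lying on the same side of the mother vessel as the UUV, as required in the next paragraph.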

Influenced by ocean currents and waves, the mother vessel is moving; the UUV is also moving and has to avoid obstacles, so it may switch between the right side and the left side of the mother vessel during the approach. Whatever happens, we only have to make sure that the circle lies on the same side of the mother vessel as the UUV.

3. Some Previous Work of PSO

Introduced by Kennedy and Eberhart in 1995, PSO has since turned out to be a strong competitor in the field of numerical optimization, and a considerable amount of work has been done on developing the original version of PSO. In this section, we summarize some of the significant previous developments.

3.1. Basic Particle Swarm Optimization (BPSO)

In PSO, each solution, called a “particle,” flies in the search space looking for the optimal position to land. The PSO system combines local search (through individual experience) with global search (through neighboring experience), attempting to balance exploration and exploitation [29]. Each particle has a position vector, a velocity vector, the best position it has encountered so far, and knowledge of the best particle in the swarm. The position vector and the velocity vector of the $i$th particle in the $D$-dimensional search space can be represented as $x_i = (x_{i1}, \ldots, x_{iD})$ and $v_i = (v_{i1}, \ldots, v_{iD})$, respectively. The best position found so far by each particle is $p_i = (p_{i1}, \ldots, p_{iD})$, and the best particle found so far by the swarm at generation $t$ is $p_g = (p_{g1}, \ldots, p_{gD})$. In each generation, each particle is updated by the two update equations given below.
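In the standard notation introduced above, these updates are, for every dimension $d = 1, \ldots, D$,

$v_{id}^{t+1} = v_{id}^{t} + c_1 r_1 \left(p_{id} - x_{id}^{t}\right) + c_2 r_2 \left(p_{gd} - x_{id}^{t}\right), \qquad x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}.$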

The parameters $c_1$ and $c_2$ are constants known as acceleration coefficients. $r_1$ and $r_2$ are random values in the range from 0 to 1, and their values are not the same for every iteration. Kennedy and Eberhart [22] suggested setting each of the acceleration coefficients to 2, in order to make the mean of both stochastic factors in (7) unity, so that particles would overfly the target only about half the time during the search. The first equation shows that, in PSO, the search towards the optimum solution is guided by the previous velocity, the cognitive component, and the social component.

Since the introduction of particle swarm optimization, numerous variations of the algorithm have been developed in the literature. Eberhart and Shi showed that PSO searches wide areas effectively but tends to lack local search precision, and they proposed a solution by introducing an inertia factor $\omega$. In this paper, we refer to this version as basic particle swarm optimization (BPSO).
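With the inertia factor $\omega$ multiplying the previous velocity, the widely used BPSO velocity update becomes

$v_{id}^{t+1} = \omega\, v_{id}^{t} + c_1 r_1 \left(p_{id} - x_{id}^{t}\right) + c_2 r_2 \left(p_{gd} - x_{id}^{t}\right),$

while the position update is unchanged.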

3.2. Time-Varying Inertia Weight (TVIW)

The role of the inertia weight $\omega$ is considered very important for PSO convergence behavior. The inertia weight is applied to control the impact of the previous history of velocities on the current velocity: a large inertia weight facilitates global exploration, while a small one tends to facilitate local exploration. In order to ensure that the particles converge to the best point in the course of the search, Shi and Eberhart [30] found that a time-varying inertia weight (TVIW) gives a significant improvement in the performance of PSO and proposed linear decreasing inertia weight PSO (LWPSO) with a linearly decreasing value of $\omega$. This modification increases the exploration of the parameter space during the initial search iterations and increases the exploitation of the parameter space during the final steps of the search [31]. The inertia weight is given by

$\omega = (\omega_1 - \omega_2)\dfrac{\mathrm{MAXITER} - \mathrm{iter}}{\mathrm{MAXITER}} + \omega_2,$

where $\omega_1$ and $\omega_2$ are the initial and final values of the inertia weight, respectively, $\mathrm{iter}$ is the current iteration number, and MAXITER is the maximum number of allowable iterations. Shi and Eberhart [27] indicate that the optimal solution can be improved by varying the value of $\omega$ from 0.9 at the beginning of the search to 0.4 at the end of the search for most problems.

Guimin et al. [28] proposed natural exponential (base $e$) inertia weight strategies, named EPSO, in which the inertia weight decays exponentially from its initial value to its final value over the iterations.
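The exact expression of [28] is not reproduced here; one natural-exponential decay of this kind, written as an assumption in the same notation as above, is

$\omega(t) = \omega_2 + (\omega_1 - \omega_2)\, e^{-t/(\mathrm{MAXITER}/10)},$

where $\omega_1$ and $\omega_2$ are the initial and final inertia weights and $t$ is the current iteration.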

3.3. Time-Varying Acceleration Coefficient (TVAC)

In PSO, the particles are updated via the cognitive component and the social component; therefore, proper control of these two components is very important for finding the optimum solution accurately and efficiently. Ratnaweera et al. [23] introduced time-varying acceleration coefficients (TVAC), which reduce the cognitive component and increase the social component by changing the acceleration coefficients $c_1$ and $c_2$ over time. The objective of this development is to enhance the global search in the early part of the optimization and to encourage the particles to converge towards the global optimum at the end of the search. TVAC is represented by

$c_1 = (c_{1f} - c_{1i})\dfrac{\mathrm{iter}}{\mathrm{MAXITR}} + c_{1i}, \qquad c_2 = (c_{2f} - c_{2i})\dfrac{\mathrm{iter}}{\mathrm{MAXITR}} + c_{2i},$

where $c_{1i}$, $c_{1f}$, $c_{2i}$, and $c_{2f}$ are constants, $\mathrm{iter}$ is the current iteration number, and MAXITR is the maximum number of allowable iterations.

Simulations were carried out with numerical benchmarks to find the best ranges of values for these constants. From the results, it was observed that the best solutions were obtained when $c_1$ changes from 2.5 to 0.5 and $c_2$ changes from 0.5 to 2.5 over the full search range.

4. Proposed New Developments

In the particle swarm algorithm, the trajectory of each individual in the search space is adjusted by dynamically altering the velocity of each particle, according to its own flying experience and the flying experience of the other particles in the search space [23]. Kennedy and Eberhart [22] indicate that a relatively high value of the cognitive component, compared with the social component, will result in excessive wandering of individuals through the search space. In contrast, a relatively high value of the social component may lead particles to rush prematurely towards a local optimum.

Considering these concerns, a novel and effective approach to the PSO algorithm is proposed in this paper. Particles are divided into several groups, and each group contains two particles. One particle in the group, denoted number 1, is developed according to its own flying experience and the flying experience of the swarm, with a high individual-experience acceleration coefficient and a low group-experience acceleration coefficient; number 1 is thus encouraged to wander through the entire search space without clustering around local optima. The other particle in the group, denoted number 2, is developed according to the flying experience of the swarm and the flying experience of the group (both number 1 and number 2). Whenever the flying experience of the group changes, the position of number 2 is reset to the group's best position and the velocity of number 2 is set to zero. Number 2 is thus encouraged to converge towards the global optimum, with a small cognitive component and a large social component. In the corresponding update equations, the random values are uniformly distributed between zero and one and are regenerated at every iteration; the remaining quantities are, for each dimension, the position and velocity of each subparticle of each group at a given iteration, the inertia weight of the subparticle, the maximum and minimum values of the acceleration coefficients, the best position found so far by the subparticle, the best position found so far by its group, and the best position found so far by the whole swarm.

Remark 1. In this paper, two subparticles are generated for each group; therefore, the subparticle index within a group takes only the values 1 and 2.

Define $n$ as the number of particles in the swarm, $M$ as the maximum number of iterations, and $f_{i,j}(t)$ as the value of the cost function for subparticle $j$ of group $i$ after the $t$th iteration.

The detailed steps are shown in Algorithm 1.

Step  1. Initialization.
  Substep 1. Set the initial parameters: n, M, the inertia weights, and the maximum and minimum acceleration coefficients.
  Substep 2. Randomly initialize the position of every subparticle of every group.
  Substep 3. Randomly initialize the velocity of every subparticle of every group.
  Substep 4. Calculate f_{i,j}(0) for every subparticle; set the personal best of every subparticle, the best position of every group, and the best position of the swarm accordingly.
  Substep 5. Set the iteration counter t = 0.
Step  2. If the stopping criteria are satisfied, output the best solution; otherwise, go to Substep 6.
  Substep 6. Update the velocity and position of subparticle number 1 of each group, using its own best position and the best position of the swarm.
  Substep 7. Update the velocity and position of subparticle number 2 of each group, using the best position of the group and the best position of the swarm.
  Substep 8. Calculate f_{i,j}(t) for every subparticle.
  Substep 9. If f_{i,j}(t) is smaller than the best value found so far by subparticle j of group i, update that personal best;
       if it is smaller than the best value found so far by group i, update the group best and reset subparticle number 2 of the group to the group-best position with zero velocity;
       if it is smaller than the best value found so far by the swarm, update the swarm best.
  Substep 10. Set t = t + 1 and go back to Step 2.
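Since the original update equations (11) are not reproduced above, the following compact sketch implements the grouped two-subparticle scheme following the textual description; the coefficient values, the boundary handling, and all names are assumptions, not the authors' code.

import numpy as np

# Sketch of the NPSO idea from Section 4: two subparticles per group, number 1
# explores (large cognitive, small social term), number 2 exploits (group best
# plus swarm best) and is reset whenever the group best improves.
def npso(cost, dim, bounds, n_groups=15, max_iter=1000,
         w=0.7, c_max=2.5, c_min=0.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_groups, 2, dim))                 # positions
    v = rng.uniform(-(hi - lo), hi - lo, (n_groups, 2, dim))    # velocities
    f = np.apply_along_axis(cost, 2, x)
    pbest, pbest_f = x.copy(), f.copy()                         # personal bests
    g_idx = f.argmin(axis=1)
    gbest = x[np.arange(n_groups), g_idx]                       # group bests
    gbest_f = f[np.arange(n_groups), g_idx]
    s_idx = gbest_f.argmin()
    sbest, sbest_f = gbest[s_idx].copy(), gbest_f[s_idx]        # swarm best

    for _ in range(max_iter):
        r = rng.random((n_groups, 4, dim))
        # Number 1: large cognitive, small social coefficient -> keeps exploring.
        v[:, 0] = (w * v[:, 0]
                   + c_max * r[:, 0] * (pbest[:, 0] - x[:, 0])
                   + c_min * r[:, 1] * (sbest - x[:, 0]))
        # Number 2: driven by the group best and the swarm best -> exploits.
        v[:, 1] = (w * v[:, 1]
                   + c_min * r[:, 2] * (gbest - x[:, 1])
                   + c_max * r[:, 3] * (sbest - x[:, 1]))
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(cost, 2, x)

        better = f[:, 0] < pbest_f[:, 0]                        # personal bests
        pbest[better, 0], pbest_f[better, 0] = x[better, 0], f[better, 0]
        for i in range(n_groups):                               # group bests
            j = f[i].argmin()
            if f[i, j] < gbest_f[i]:
                gbest[i], gbest_f[i] = x[i, j].copy(), f[i, j]
                x[i, 1], v[i, 1] = gbest[i].copy(), 0.0         # reset number 2
        if gbest_f.min() < sbest_f:                             # swarm best
            s_idx = gbest_f.argmin()
            sbest, sbest_f = gbest[s_idx].copy(), gbest_f[s_idx]
    return sbest, sbest_f

A call such as npso(lambda z: float(np.sum(z**2)), dim=30, bounds=(-100.0, 100.0)) exercises the sketch on a sphere-type cost; the bound values here are placeholders, not the paper's settings.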

5. Experimental Settings and Simulation for Benchmark Testing

Simulations were carried out to observe the rate of convergence and the quality of the optimum solution of the new method introduced in this investigation by comparing it with BPSO, LWPSO, EPSO, and TVAC. From the standard set of benchmark problems available in the literature, six important functions are considered to test the efficacy of the proposed method. The test functions reflect different degrees of complexity.

5.1. Functions Introduction

The functions are as follows.

5.1.1. Sphere Function

Consider the Sphere function $f(x) = \sum_{i=1}^{n} x_i^2$. Within its search space, the global minimum is located at $x^* = (0, \ldots, 0)$ with $f(x^*) = 0$. It is a very simple, convex, unimodal function with only one local optimum.

5.1.2. Rotated Hyperellipsoid Function: Schwefel’s Problem 1.2

Consider the rotated hyperellipsoid function (Schwefel's Problem 1.2) $f(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$. Within its search space, the global minimum is located at $x^* = (0, \ldots, 0)$ with $f(x^*) = 0$. It is continuous, convex, and unimodal. With respect to the coordinate axes, this function produces rotated hyperellipsoids.

5.1.3. Rosenbrock Function

Consider the Rosenbrock function $f(x) = \sum_{i=1}^{n-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2 \right]$. Within its search space, the global minimum is located at $x^* = (1, \ldots, 1)$ with $f(x^*) = 0$. It is a unimodal function whose global optimum lies inside a long, narrow, parabolic-shaped flat valley. Finding the valley is trivial, but converging to the global minimum is difficult.

5.1.4. Rastrigin Function

Consider the Rastrigin function $f(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos\left(2\pi x_i\right) + 10 \right]$. Within its search space, the global minimum is located at $x^* = (0, \ldots, 0)$ with $f(x^*) = 0$. It is highly multimodal; however, the locations of the minima are regularly distributed.

5.1.5. Griewank Function

Consider the Griewank function $f(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$. Within its search space, the global minimum is located at $x^* = (0, \ldots, 0)$ with $f(x^*) = 0$. It is a multimodal function with many widespread local minima; however, the locations of the minima are regularly distributed.

5.1.6. Sum of Different Powers Function

Consider the sum of different powers function $f(x) = \sum_{i=1}^{n} |x_i|^{\,i+1}$. It is a commonly used unimodal test function. Within its search space, the global minimum is located at $x^* = (0, \ldots, 0)$ with $f(x^*) = 0$.

5.2. The Coefficients Setting

The parameters for the simulations are listed in Table 1.

In this table, $c$ denotes the acceleration coefficients and $\omega$ denotes the inertia weight; the dimension and the ranges of the search space and of the velocity space are specified for each test function. If the current position is out of the search space, the position of the particle is set to the boundary value and the velocity is set to zero. If the velocity of the particle is outside its range, its value is set to the boundary value. The maximum number of iterations is set to 1000. For each function, 100 trials were carried out, and the average optimal value and the standard deviation are presented. To verify the performance of the algorithms at different dimensions, the dimension is increased from 10 to 100, and the optimal mean and variance of each test function are calculated for every algorithm.
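A direct rendering of this boundary handling (variable names are assumptions) is:

import numpy as np

# Positions outside the search space are clamped to the boundary and their
# velocities zeroed; velocities outside the velocity range are clamped.
def clamp(x, v, x_min, x_max, v_min, v_max):
    out = (x < x_min) | (x > x_max)
    x = np.clip(x, x_min, x_max)
    v = np.where(out, 0.0, np.clip(v, v_min, v_max))
    return x, v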

5.3. The Results Comparing NPSO with the Previous Developments of PSO

The simulation results are given in Table 2. The comparison shows that, for the unimodal functions, the searching accuracy and stability, arranged from low to high, are BPSO, LWPSO, EPSO, TVAC, and NPSO. For the multimodal functions, the performance of NPSO is about the same as that of TVAC, but, for the highly multimodal Rastrigin function, TVAC is superior to NPSO.

As the dimension of the test functions increases, the searching accuracy and stability of every algorithm decrease, and the performance of NPSO remains superior to that of the other algorithms; for the highly multimodal function, the performance of TVAC is superior to NPSO. Figure 5 shows that the order of searching speed from high to low is BPSO (green solid curve), LWPSO (red circles), EPSO (blue triangles), TVAC (black dotted curve), and NPSO (blue dash-dot curve). It is obvious that the performance of NPSO is more effective than that of the other algorithms. This is because, in NPSO, two particles evolve in coordination within the same group: one particle of the group is encouraged to wander through the entire search space without clustering around local optima, while the other particle of the group flies between the group experience and the swarm experience and is encouraged to converge towards the global optimum.

6. The Simulation of UUV Path Planning

The simulation is designed using the proposed algorithm for UUV path planning in a dynamic obstacle environment. The UUV starts at the initial point, and the candidate point for the next step is searched with NPSO within the range allowed by the turning-angle limitation; the next path point is the one for which the cost function reaches its minimum value. The simulation map is shown in Figure 6. The initial point is A (50, 50), the dock hatch point is D (900, 900), and the turning angle per step is limited. To simplify the issue, the velocity is set to a constant 4 m/s, and the UUV will encounter dynamic obstacle B, which moves parallel to one coordinate axis with its sway coordinate at 200 m, and dynamic obstacle C, which moves parallel to the other axis with its surge coordinate at 800 m.

In Figure 6, the UUV avoids obstacle B by steering left and avoids obstacle C by changing course towards its starboard side. The simulation results demonstrate that the proposed algorithm effectively implements path planning in a dynamic obstacle environment.

To compare the performance of NPSO with LWPSO, EPSO, and TVAC for path planning, a simulation is designed for path planning without dynamic obstacles. Table 3 shows the parameters used for the different algorithms, and Table 4 shows the simulation results.

From Table 4, NPSO and TVAC present better performance than the other PSO variants.

To compare the performance of the algorithm proposed in this paper with the traditional APF and A* algorithms, a complex environment map is introduced in Figure 7.

As shown in Figure 7, the proposed algorithm obtains better performance than the traditional APF for UUV path planning in a complex environment. If the starting point is A (50, 50) and the target is B (950, 950), the traditional APF easily falls into local minima and cannot finish the path planning. To further analyze the performance, we set starting point C (50, 950) and target D (950, 50); from Figure 7, the traditional APF easily oscillates in front of the obstacle. The A* algorithm discretizes the search space with a uniform grid, so the possible bearings are discretized too, and, as a consequence, the time between consecutive surfacings is not constant. From Table 5, the proposed algorithm obtains a shorter path than the other algorithms; it also produces a smoother path and is superior to the traditional APF and A* algorithms. Table 6 shows the computation times of the different algorithms: the traditional APF takes the shortest time, and NPSO takes the longest.

7. Conclusion

In this paper, UUV path planning for homing and docking in a dynamic obstacle environment using NPSO is presented, and an appropriate strategy for mother vessel recovery is explored. The simulation results demonstrate the feasibility of NPSO: the UUV can avoid the dynamic obstacles and navigate along a reasonable path to implement the recovery.

The simulation of UUV path planning in a complex environment shows that the proposed algorithm obtains a better path than the traditional APF and A* algorithms and is not easily trapped in local minima. In order to completely escape from local minima or U-shaped obstacles, it is necessary to design some escape rules, such as wall-following or random escaping.

NPSO is introduced and tested on a set of six benchmark functions, and the statistical analyses of the simulated results are compared with BPSO, LWPSO, EPSO, and TVAC. The tests of the proposed method on the unimodal benchmark functions indicate its superiority over the other methods; however, for the highly multimodal Rastrigin function, TVAC shows better performance than NPSO.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by the Natural Science Foundation of China (51179038; 51109043; and 51309067/E091002) and the Foundation of Heilongjiang Province, China (E201123).