Abstract

A new real-time path planning approach based on belief space is proposed, which addresses two problems in local path planning: modeling the environment detected in real time and optimizing the path by fusing multiple factors. First, a double-safe-edges free space is defined to describe the sensor's detection characteristics, transforming the complex environment into a set of free areas within which the robot can reach any position effectively and safely. Then, based on uncertainty functions and the transferable belief model (TBM), a basic belief assignment (BBA) space is constructed for each factor, and these spaces are fused during path optimization. The optimized path is obtained by combining the BBAs and making decisions through the resulting probability distribution. Simulation results indicate that the new method performs well in real-time local path planning.

1. Introduction

Recently, the development and application of autonomous robots have attracted growing interest in industrial and military fields. Navigation is one of the key technical problems for autonomous robots, and its most important component is map building based on the sensor system, especially when the robot works in a completely unknown environment. The environment is reconstructed by merging the information delivered by the sensor system during motion. One of the main difficulties in building a practical map is the poor quality of the sensor information, caused by the inherently wide radiation cone and by multiple reflections. Thus, how to describe these uncertainties, how to filter out inaccurate and conflicting information, and how to construct the environment view are active research issues.

In recent years, roughly three types of approaches for constructing the environment view have appeared in the open literature. The first is the occupancy grid mapping method [1], which represents the map with fine-grained grids modeling the occupied and free space of the environment. The second is the geometrical mapping method [2], which uses sets of lines, angles, and polygons to describe the geometry of the environment. The third is the topological method [3, 4], which models the environment as a series of landmarks connected by arcs.

To describe these uncertainties and filter out conflicting sensor information, probabilistic algorithms [5] were originally proposed with a definitive formulation based on the Bayesian technique. Later, a family of algorithms [6] based on fuzzy theory [7] established an uncertainty model for each cell. In a similar way, approaches based on Dempster-Shafer theory describe the uncertainty model using belief functions. More recently, neural network techniques exploiting the learning ability of neural cells have been introduced [8].

There is no doubt that optimization is crucial for autonomous robot path planning. Many evolutionary optimization techniques, such as genetic algorithms [9-11], neural networks [12], and ant colony optimization [13], are widely used for global path planning on the condition that the environment has already been mapped. However, these algorithms do not work well in real-time local path planning because, besides path length, other factors such as the robot's own characteristics and the influence of the particular environment (ocean current, wind speed) also affect the selection of the local target point. As far as we know, few researchers consider these factors when solving real-time path planning problems.

In this paper, a novel real-time path planning approach based on belief space is introduced. The transferable belief model (TBM), which has become popular in recent years, provides a highly flexible way to manage uncertain information in multisensor data fusion problems, and many applications of the TBM have been reported for mobile vehicles and other areas [14-17].

The rest of the paper is organized as follows. In Section 2, the uncertainty model of the sensor detection is briefly described, and the main idea of the transferable belief model is reviewed in Section 3. Section 4 describes the simulation of the sensor detection process and the transformation between the detection and movement coordinate frames. In Section 5, the complex environment information is expressed by the double-safe-edges free space, which simplifies the environment detected in real time and prepares for real-time path planning. In Section 6, the belief space is established from the belief functions of the factors that affect the selection of the local target points, so that the optimal local target point can be found at each step; connecting these local target points yields the optimized path for the task. Section 7 presents the experimental results of the new path planning approach, and Section 8 concludes the paper.

2. Uncertainty Model of the Sensor Detection

Sonar is far from being an ideal sensor, mainly because of the width of the radiation cone and the multiple reflections phenomenon. The former does not allow the exact angular position of the obstacle to be determined along the fixed-angle arc corresponding to the detected distance. The latter needs a more thorough explanation. Sonar waves are reflected in two different ways depending on the surface irregularities. If their size is much smaller than the wavelength of the signal, the reflection is diffuse, that is, the incident energy is scattered in all directions; otherwise, the reflection is mainly specular, and the beam may either reach the receiver after multiple reflections or even get lost [18].

The uncertainty model is built with a fuzzy measure approach. A single reading provides the information that one or more obstacles are located somewhere along the arc of circumference whose radius equals the detected range. Hence, there is evidence that points located in the proximity of this arc are "occupied." On the other hand, points well inside the circular sector bounded by this arc are likely to be "empty." To model this knowledge, two certainty functions, one for "empty" and one for "occupied," are introduced [19].

They describe, respectively, how the degree of certainty of the assertions "empty" and "occupied" varies with the distance from the sensor for a given range reading. Two constants correspond to the maximum values attained by the functions, and a width parameter defines the area considered "proximal" to the detected arc [20].

Since the intensity of the waves decreases to zero at the borders of the radiation cone, the degree of certainty of each assertion is assumed to be higher for points close to the beam axis. This is realized by defining an angular modulation function [19].

To weaken the confidence of each assertion as the distance from the sensor increases, a parameter plays the role of a "visibility radius," around which a smooth transition occurs from certainty to uncertainty. The motivation for introducing this function is twofold. First, the possibility of multiple reflections increases as the beam travels farther. Second, narrow passages appear to be obstructed when seen from a large distance, due to the sensor's wide radiation angle. By varying the visibility radius according to the characteristics of the environment, a more correct detection behavior can be obtained [20].
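The exact expressions of [19, 20] are not reproduced in this extraction; purely as an illustrative sketch, the angular modulation and the visibility attenuation could take shapes such as the following, where $\phi$ is the angle from the beam axis, $\Theta$ the cone aperture, $\rho$ the distance from the sensor, $R_v$ the visibility radius, and $\sigma$ the width of the transition (all symbols and both functional forms are our own assumptions, not the paper's formulas):

$a(\phi) = \max\!\left(0,\; 1 - \left(\frac{2\phi}{\Theta}\right)^{2}\right), \qquad v(\rho) = \frac{1}{2}\left(1 - \tanh\frac{\rho - R_v}{\sigma}\right).$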

3. The Transferable Belief Model (TBM)

The TBM is a model for describing quantified beliefs based on belief functions. Beliefs are held at two levels: (1) a "credal" level, where beliefs are entertained and quantified by belief functions; (2) a "pignistic" level, where beliefs are used to make decisions and are quantified by probability functions. The relation between the belief function and the probability function used when decisions must be made is derived and justified in [21].

In the TBM, we consider the actual value of a variable whose finite domain, the frame of discernment, is given. The uncertainty about this value is represented by basic belief masses (BBMs), and the function that assigns a mass to every subset of the frame is the basic belief assignment (BBA). The BBA maps the power set of the frame to [0, 1], and the masses sum to one [22].
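For concreteness, the standard TBM statement is as follows (assuming the usual notation, with $\Omega$ the frame of discernment and $m$ the BBA):

$m : 2^{\Omega} \to [0,1], \qquad \sum_{A \subseteq \Omega} m(A) = 1,$

where, unlike Dempster-Shafer theory under a closed-world assumption, the TBM allows $m(\emptyset) > 0$.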

The mass given to a subset represents the part of belief supporting that the actual value belongs to that subset, without being more specific about which element it is. Several useful functions can be derived from the BBA [23].

The belief function is defined as
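In the usual notation (with $m$ the BBA on the frame $\Omega$), the standard definition reads

$\operatorname{bel}(A) \;=\; \sum_{\emptyset \neq B \subseteq A} m(B), \qquad A \subseteq \Omega.$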

The belief of a subset represents the total amount of belief supporting that the actual value is in that subset, without supporting that it is in its complement relative to the frame.

The plausibility function is defined as
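In the same notation, the standard definition reads

$\operatorname{pl}(A) \;=\; \sum_{B \cap A \neq \emptyset} m(B) \;=\; \operatorname{bel}(\Omega) - \operatorname{bel}(\bar{A}), \qquad A \subseteq \Omega.$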

The plausibility of a subset represents the total amount of belief that does not contradict the hypothesis that the actual value lies in that subset.

Combination rules: in the generalized Bayesian theorem, a sensor observation is summarized by a vector of conditional plausibilities, one for each element of the frame. Rather than writing the conditional beliefs as probability functions, it is easier to use the likelihood of the observation given each element [24]. Given these likelihoods, Smets [25] proved the generalized Bayesian theorem, which expresses the conditional plausibility of any subset in terms of the elementary likelihoods.
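Two standard forms are worth recalling here, written with assumed symbols ($x$ an observation, $\theta$ an element of the frame $\Theta$, and $m_1, m_2$ two BBAs to be fused). The generalized Bayesian theorem builds the conditional plausibility of a subset from the elementary likelihoods, and the TBM conjunctive rule combines two distinct pieces of evidence:

$\operatorname{pl}(A \mid x) \;=\; 1 - \prod_{\theta \in A}\bigl(1 - \operatorname{pl}(x \mid \theta)\bigr), \qquad A \subseteq \Theta,$

$m_{1 \cap 2}(A) \;=\; \sum_{B \cap C = A} m_1(B)\, m_2(C).$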

The decision-making function is defined as
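In the standard notation (BBA $m$ on the frame $\Omega$), the pignistic transformation used for decision making is

$\operatorname{BetP}(\omega) \;=\; \sum_{A \subseteq \Omega,\; \omega \in A} \frac{m(A)}{|A|\,\bigl(1 - m(\emptyset)\bigr)}, \qquad \omega \in \Omega.$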

In the TBM, when a decision has to be made, a probability function on the frame must be adopted; this pignistic probability is an ordinary probability measure.

4. The Simulation of the Sensor Detection Process

4.1. The Detection Process of the Sensor

We build a simulation environment for the detection process of the robot sensor in order to test the new real-time path planning approach. The sensor is an active sensor with a detection angle of 180° and a fixed maximum detection distance, so the simulation casts rays from the robot's particle at 1° intervals over the detection angle, each ray having a length equal to the maximum detection distance.

In Figure 1, the point is the particle of the sensor, the sector is the detection area, its radius is the maximum detection distance, the rays represent the sound waves of the sensor, and the diameter of the sector is perpendicular to the robot's axis. With this arrangement, each scan of the environment yields 181 distance measurements within the sector, which constitute the position information of the obstacles.

Figure 2 shows the four stages of the simulated detection process. Figure 2(a) shows the obstacle-free state at the initial time: the particle of the sensor, the 181 rays, and the sector-shaped detection area. Figure 2(b) shows the obstacles that the sensor has to detect. Figure 2(c) shows the state after the sensor has detected the obstacles of Figure 2(b): some of the 181 rays are truncated, which constitutes the detection process. Figure 2(d) shows the result of the detection; the position information of the obstacles is recorded as the set of truncated ray endpoints.
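To make this scanning scheme concrete, the following Python sketch casts 181 rays, one per degree over the frontal semicircle, against circular obstacles and returns the truncated ranges. The function name, the use of circles to stand in for obstacle outlines, and the NumPy implementation are our own assumptions for illustration, not part of the paper's simulator.

import numpy as np

def simulate_scan(robot_xy, heading, obstacles, r_max, n_rays=181):
    """Hypothetical sketch of the 181-ray scan described above.

    robot_xy : (x, y) position of the robot's particle
    heading  : robot heading in radians (rays span heading - pi/2 .. heading + pi/2)
    obstacles: iterable of (cx, cy, radius) circles standing in for obstacle outlines
    r_max    : maximum detection distance
    Returns an array of 181 ranges (r_max where nothing is hit).
    """
    x0, y0 = robot_xy
    ranges = np.full(n_rays, r_max, dtype=float)
    for i in range(n_rays):
        ang = heading - np.pi / 2 + np.deg2rad(i)   # one ray per degree
        dx, dy = np.cos(ang), np.sin(ang)
        for cx, cy, rad in obstacles:
            ox, oy = cx - x0, cy - y0
            t = ox * dx + oy * dy                   # projection of the center on the ray
            if t <= 0:
                continue
            d2 = ox * ox + oy * oy - t * t          # squared ray-to-center distance
            if d2 > rad * rad:
                continue
            hit = t - np.sqrt(rad * rad - d2)       # first intersection along the ray
            if 0 < hit < ranges[i]:
                ranges[i] = hit
    return ranges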

4.2. The Transformation of the Detecting Space Coordinates of the Sensor

During path planning, the positions of the obstacles and of the robot, as well as the global target point and the local target points, must be described at every time step, so a uniform reference frame is needed. Two reference frames are used in this paper, the reference frame of the robot's movement and the reference frame of the sensor detection, so a transformation between them is required. The robot movement frame is a Cartesian coordinate system in which the start point, the global target point, and the robot position are expressed; the sensor detection frame is a polar coordinate system in which the obstacle information and the local target points are expressed. The transformation between the two frames is shown in Figure 3.

In Figure 3, the origin of the Cartesian frame is fixed, and the positions of the robot and of the global target are given in this frame; the origin of the polar frame coincides with the robot's particle, its polar axis is aligned with the robot's heading, and the angle between the polar axis and the x-axis of the Cartesian frame is the robot's heading angle. A point detected in the polar frame, whether an obstacle or a local target point, is first expressed in Cartesian coordinates in the sensor frame and then, using the robot's position vector, converted into the Cartesian coordinates of the movement frame.
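A minimal statement of this change of frame, with assumed symbols (robot position $(x_r, y_r)$ and heading $\theta$ in the movement frame, and a detection with range $\rho$ and bearing $\varphi$ measured from the polar axis), is

$x = x_r + \rho \cos(\theta + \varphi), \qquad y = y_r + \rho \sin(\theta + \varphi).$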

5. The Procedure of Confirming the Double-Safe-Edges Free Space

5.1. The Description of the Environment Information in Real-Time

In an uncertain and dynamic environment, the environment information for path planning is obtained only from the sensor on the robot, so the algorithm must run in real time, and interpreting this information is the first step in generating the robot's motion. Based on the sensor detection model, we propose a method that extracts the essential information, called the double-safe-edges (DSE) information, from the raw detection data.

5.1.1. Searching for the Sensor Edges

Obviously, the sensor edges can be extracted directly from the sensor detection information, and their distances and directions can be determined from the positions of the obstacles.

In Figure 4, the point is the particle of the robot (the sensor and the robot share the same particle), the self-safe area of the robot is a circle centered at this particle with a given safe radius, the detection covers an angular range in front of the robot up to a maximum detection radius, two obstacles lie in the scene, and a safe distance between the robot and the obstacles is prescribed. Because the robot's detection area is a semicircle in front of the robot, the detection is simulated with 181 rays of maximum length, one per degree, and the set of sensor edges can be found quickly from the scan using a decision parameter. The rule for detecting a sensor edge is as follows: whenever the difference between two adjacent range readings exceeds the decision threshold, a sensor edge appears at that ray. The rule for the direction of a sensor edge is as follows: searching for sensor edges from the robot's left side, the sign of the jump between the two adjacent readings determines whether the edge is labeled a left edge or a right edge.
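A minimal sketch of this edge rule, assuming the scan is given as an array of 181 ranges and that the left/right labeling follows the sign of the jump (our convention, since the paper's exact parameters are not reproduced here):

def find_sensor_edges(ranges, jump_threshold):
    """Hypothetical sketch of the sensor-edge rule described above.

    A sensor edge is declared wherever two adjacent range readings differ
    by more than `jump_threshold`; the sign of the jump gives its direction.
    Returns a list of (ray_index, direction) with direction 'L' or 'R'.
    """
    edges = []
    for i in range(len(ranges) - 1):
        jump = ranges[i + 1] - ranges[i]
        if abs(jump) > jump_threshold:
            # scanning from the robot's left: a short-to-long jump is taken as a
            # left edge and a long-to-short jump as a right edge (our convention)
            direction = 'L' if jump > 0 else 'R'
            edges.append((i, direction))
    return edges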

Figure 5 shows the range curve of one scan at a certain time, together with the resulting set of sensor edges and their directions.

5.1.2. Searching for the Double-Safe-Edges

The edges found above are based only on the sensor. However, the robot has its own safe area determined by its shape and kinematics; if only the sensor edges were considered, the path planning would fail. It is therefore necessary to consider the environment information and the robot's safe area together.

In this paper, the definition of the double-safe-edges takes both the environment information and the robot's safe area into account.

Definition 1 (double-safe-edges (DSE)). Once the sensor edges have been found, the algorithm searches, starting from each sensor edge and following its direction, for points that account for both the environment information and the robot's safe area: the tangent lines drawn from these points to the robot's safe circle must at the same time be tangent to the edges of the obstacles, with the robot and the obstacle lying on different sides of the tangent line. These points form the set of double-safe-edge points, and the corresponding tangent lines form the set of double-safe-edges.

In Figure 6, the point is the particle of the robot, and the sensor-edge set, its direction set, and the radius of the robot's safe circle are shown. Figure 7 shows the range curve at a certain time; the set of double-safe-edges and their directions can be found according to the definition above. This double-safe-edge information is essential for generating the motion commands in this paper.

Finding the double-safe-edges means that the environment detected by the sensor has been analyzed and interpreted efficiently: the environment information has been simplified, the real-time performance has improved, and the robot's safe area and kinematics have been taken into account, which makes the generation of motion commands in the next step efficient.

5.2. The Types of the Double-Safe-Edges

There are three types of the double-safe-edges: S-DSE, M-DSE, and Z-DSE.

(1)  S-DSE. Only a single DSE point is found after analyzing and interpreting the environment information. In Figure 8, a line connects the goal and the robot; if this line intersects the obstacle, the robot must avoid the obstacle, so the safe moving direction and the safe moving distance are determined from the DSE point and its tangent line.

(2)  M-DSE. More than one DSE point is found after analyzing and interpreting the environment information. In Figure 9, lines connect the target point to intersection points on the safe circle; if these lines intersect the obstacle, the robot must avoid it, so a set of possible paths is constructed from the DSE point set, the DSE line set, and the corresponding angles.

(3)  Z-DSE. No DSE point is found after analyzing and interpreting the environment information. In the first situation there is no obstacle around the robot, so the robot can move toward the target directly; in the second situation part or all of the detection area is enclosed by an obstacle. For example, in Figure 10 part of the detection area is enclosed by the obstacle. In this case there is no DSE, so the robot enters a cruising state in order to find one. The cruising rule is as follows: if part of the detection area is enclosed, a set of candidate directions is derived from the detected boundary points, and the moving direction is chosen by a heuristic algorithm. The robot moves according to this rule, searching as it goes, until a DSE appears.

5.3. The Description of the Double-Safe-Edges Free Space

After the double-safe-edges have been found, the algorithm shifts its focus from the raw environment information to a few double-safe-edge points, which can be used to generate the robot's motion. This part describes the double-safe-edges free space.

In Figure 11, two double-safe-edge points are shown. The segments from the robot's particle to each double-safe-edge point, the corresponding tangent points on the circle of the robot's safe area, and the tangent segments themselves define the possible planning distances for navigation. The double-safe-edges free space is then defined by the sector bounded by these elements. This area is a free moving space, and the robot can select the local target point inside it according to some rule so that the motion command is produced on time.

6. The Optimization of the Real-Time Local Path Planning Based on the Belief Space

6.1. The Description of the Path Optimization in Real-Time Path Planning

Although the environment information can be detected by the sensor in real time, the robot does not know the whole environment, so optimizing the entire path during real-time path planning is not possible. Nevertheless, several important factors affect the selection of the path in the local environment. Six local planning factors are considered in this paper: the collision avoidance factor between the robot and the obstacles, the kinematics factor, the self-safe area factor, the path length factor, the moving obstacle factor, and other factors (ocean current, wind speed, and so on). This part illustrates the influence of three of these factors on real-time local detection planning.

In Figure 12, the robot occupies a given position at a given time, the real-time detection space and a set of candidate local target points are known, and the global target is given; the figure is a sketch of the optimization analysis in the local detection space. First, an edge area (the broken-line area) is found according to the collision avoidance factor, which restricts the selectable region. Second, assuming the robot's kinematics is a turning (gyration) movement, a reachable area (the lightly shaded area) is found according to the kinematics factor; together, these two factors shrink the region from which the local goal may be selected. Finally, supposing another factor determines the robot's speed, this speed bounds the displacement achievable in one step, so a reachable area at the next time step (the solid-line area) is found by combining the factors. This area becomes smaller as more factors are considered, so analyzing, treating, and fusing these factors is a necessary way to optimize the path in real-time local detection path planning.

6.2. The Original Idea of the Belief Space in Local Target Selection

Figure 13 depicts the local target point selection situation: the robot must reach the target point according to a set of selection rules applied to a set of candidate local target points, so these selection rules need to be fused in order to find the optimal local target point.

For each selection rule, a selection state space is known, and the influence of the rule on a candidate local target point is defined by a BBA on that selection state space, so each candidate point carries a set of BBAs. The three gray areas in Figure 13 are the supports of these BBAs; they describe the obstacle information, the kinematics, and the path length factors. The belief space is defined as the collection of these BBA spaces, the belief in each local target point is expressed by belief functions, and the belief functions are fused according to the TBM rules, as written out below.
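Written out with assumed symbols ($m_1, \dots, m_n$ the factor BBAs of one candidate point, all defined on a common selection frame), one standard way to realize this fusion, consistent with the TBM rules cited above, is the conjunctive combination:

$m_{\text{fused}}(A) \;=\; \sum_{A_1 \cap \cdots \cap A_n = A} \; \prod_{i=1}^{n} m_i(A_i).$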

The selected local target point must satisfy every belief function distribution at the same time, so some candidates can be discarded and the candidate set is reduced; the optimal local target point is the one with the best fused belief distribution. In this way, the selection of the local target point at each time step is completed in the belief space, and the optimization goal is achieved.

6.3. The Method of Making the Belief Function
6.3.1. The Belief Function Distribution of the Sensor Detection

Building on the uncertainty model described above, we now discuss the belief function distribution in Figure 14. The detection angle is 90°, the beam axis is taken as the reference direction, and angles to its right are counted positive. A detected position is characterized by its detection distance and its bearing, obtained from (1)-(4); the uncertain area caused by this detection is the circular sector whose width approaches the detection distance (the gray area in Figure 14). For a point inside this area, the angle between the point and the reference direction and its offset from the detected bearing determine the detection area uncertainty function, which is defined as

The detection position uncertainty function is

According to the analysis above, the detection area uncertainty is distributed from the beam axis outward to the sides of the cone, and the detection position uncertainty is distributed from the proximal sector outward over the detection range. The plausibility that a point lies in the detection area is therefore defined as

The plausibility that a point lies at the detected position is defined as

So the "occupy" and "empty" plausibility functions of a position can be defined as

Thus, the BBA of the “occupy” and “empty” in TBM can be defined as
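The paper's exact assignment is not reproduced in this extraction; one standard way to obtain a BBA on the two-element frame {occupied, empty} from the two plausibilities (valid when they sum to at least one, with $m(\emptyset)=0$, and given here only as a sketch) is

$m(\{\text{occ}\}) = 1 - \operatorname{pl}(\{\text{emp}\}), \quad m(\{\text{emp}\}) = 1 - \operatorname{pl}(\{\text{occ}\}), \quad m(\{\text{occ},\text{emp}\}) = \operatorname{pl}(\{\text{occ}\}) + \operatorname{pl}(\{\text{emp}\}) - 1.$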

This BBA space is defined within the belief space; it gives the belief function distribution of the sensor detection factor according to the basic idea of the belief space.

6.3.2. The Belief Function Distribution of the Safe Distance to the Obstacle

Figure 15 shows the belief function distribution of the safe distance to the obstacle: one point is the particle of the robot, another point lies on the edge of the obstacle, and the shortest safe distance equals the radius of the self-safe area. In a real environment, when the robot comes within a specified alertness distance of an obstacle, it needs to evaluate the degree of collision danger. The safe distance function can then be given as
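The paper's exact expression is not reproduced in this extraction; a typical shape (purely an illustrative sketch with assumed symbols, where $d$ is the measured distance, $D_{\min}$ the safe radius, and $D_{a}$ the alertness distance) would ramp linearly from danger to safety:

$f_{\mathrm{safe}}(d) = \min\!\left(1,\; \max\!\left(0,\; \frac{d - D_{\min}}{D_{a} - D_{\min}}\right)\right).$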

Then the "safe" and "dangerous" plausibility functions can be defined as

Finally, the BBA of the “safe” and “dangerous” can be defined as

All these BBA spaces are defined within the belief space; they give the belief function distribution of the safe distance factor according to the basic idea of the belief space.

Other factors can be defined and described in the belief space in the same way; this is the basic step before fusing the factors to find the optimal local target point.

6.3.3. The Belief Function Distribution of the Path Optimization

In Figure 16, we give the distribution of the path belief function in one movement space. One point is the particle of the robot, and the detection distance and the angle of the free space are given; two cases are distinguished when defining the selection of the local goal for optimizing the path. (1) When the global target lies inside the free space at this time, the path proportion function is built from two directional components, one along the direction of the distribution and one along the direction toward the target, and the path proportion function is obtained by combining them. (2) When the global target lies outside the free space at this time, the path proportion function is defined analogously.

In both cases, the component along the direction of the distribution and the component along the direction toward the target are combined to obtain the path proportion function.

So the path "optimization" and "nonoptimization" plausibility functions can be defined as

In the same way, the BBA of the "optimization" and "nonoptimization" in the TBM can be defined as

All these BBA spaces are defined within the belief space; they give the belief function distribution of the path length factor according to the basic idea of the belief space.

6.3.4. The Belief Function Distribution of the Dynamics of the Robot

Figure 17 shows the distribution of the dynamics of the robot in one movement space. One point is the particle of the robot, and the robot moves with a given speed. Assuming a non-sliding movement, the track of the movement is an arc of a circle; the position of the local target point, the radius of the track, the farthest position the robot can reach within a certain time, and the minimum movement radius are all given. The reached proportion function is therefore considered in two ways: (1) the reached proportion function for points on the same track radius and (2) the reached proportion function for points within the same detection area.

The reached proportion function of a local target point at a given detection time is then

So the "reach" and "unreach" plausibility functions can be defined as

Then the BBA of the “reach” and “unreach” in TBM can be defined as

All these BBA spaces are defined within the belief space; they give the belief function distribution of the kinematics factor according to the basic idea of the belief space.

6.3.5. The Belief Function Distribution for Escaping the Moving Obstacle

Figure 18 gives the distribution of the belief function for a moving obstacle in one movement space. One point is the particle of the robot, the detection distance and the angle of the free space are given, and the speed of one side of the moving obstacle is known, so the belief function distribution of the moving obstacle in the free movement space can be defined.

The edge point on one side of the obstacle moves to a new position after a time interval, so the double-safe-edges free space changes accordingly, while the robot can also reach a new position within the same interval. Comparing the angles spanned by the original and the changed free spaces with the robot's possible positions describes the belief function distribution for escaping the moving obstacle.

The collision function of the robot moving from its current position to the new position is

So the "safe" and "collision" plausibility functions can be defined as

Then, the BBA of the “safe” and “collisions” in TBM can be defined as

All these BBA spaces are defined within the belief space; they give the belief function distribution of the moving obstacle factor according to the basic idea of the belief space.

6.4. The Model of Fusing the Correlation Factors in Belief Space

Suppose that, at a given time, the set of candidate local target points, the set of correlation factors, and the selection state space of each factor are known. For each candidate point, the BBAs of the factors on their selection state spaces form the belief space, and this part combines these BBAs in the belief space to optimize the selection of the local target point.

6.4.1. The Structure of the Local Target Point Belief Space

Figure 19 shows three proposition spaces and a decision-making function. A multimapping links the first proposition space to the second, and another multimapping links the second to the third; these mappings make up the "credal" level, while the decision-making function makes up the "pignistic" level of the TBM.

At the credal level, each local target point is influenced by all of its factors, so the fused belief space of a local target point must account for the whole set of factors at the same time. Each factor has its own belief space, the BBA of the whole set of factors depends on the factors' selection state spaces, and this chain structure of the local target point belief space translates the influence of the factors into BBA functions in the belief space. At the pignistic level, the influence degrees of the factors are expressed by probability functions, which is the final form used to select the local target point.

6.4.2. The Fusing Process of the Belief Space

Once the factors have been expressed as BBA functions in the belief space, the belief space must be fused; the details of the process at each time step are as follows.

Step 1. The set of candidate local target points is selected according to the double-safe-edges free space, and the correlation influence factors at the current time are identified.

Step 2. The BBA of each correlation influence factor is calculated, and the belief space is built according to the basic idea of the belief space in local goal selection.

Step 3. In the belief space, the BBAs are combined over the elements of the selection state space, so that a fused belief defined on the common state space is obtained.

Step 4. The fused BBA is transformed into a pignistic probability distribution, and the optimal local target point is selected according to this distribution (a worked sketch of Steps 1-4 is given below).
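As a concrete illustration of Steps 1-4, the following Python sketch fuses, for each candidate point, a list of factor BBAs with the unnormalized conjunctive rule and then ranks the candidates by their pignistic probability. The function names, the binary frame {'good', 'bad'} used for every factor, and the example masses are our own assumptions for illustration, not the paper's actual frames or values.

from itertools import product

def conjunctive_combine(m1, m2):
    """Unnormalized TBM conjunctive combination of two BBAs.
    BBAs are dicts mapping frozenset hypotheses to masses."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + wa * wb
    return out

def pignistic(m):
    """Pignistic transformation BetP over singletons (renormalizing the conflict mass)."""
    conflict = m.get(frozenset(), 0.0)
    betp = {}
    for a, w in m.items():
        if not a:
            continue
        for elem in a:
            betp[elem] = betp.get(elem, 0.0) + w / (len(a) * (1.0 - conflict))
    return betp

def select_local_target(candidates, factor_bbas):
    """Steps 1-4 sketch: candidates is a list of point ids; factor_bbas maps each
    candidate to a list of BBAs on the frame {'good', 'bad'} (one BBA per factor).
    Returns the candidate whose fused pignistic probability of 'good' is highest."""
    best, best_score = None, -1.0
    for c in candidates:
        fused = {frozenset({'good', 'bad'}): 1.0}      # vacuous prior belief
        for m in factor_bbas[c]:
            fused = conjunctive_combine(fused, m)
        score = pignistic(fused).get('good', 0.0)
        if score > best_score:
            best, best_score = c, score
    return best, best_score

# Hypothetical usage with two candidate points and two factors each
bbas = {
    'P1': [{frozenset({'good'}): 0.6, frozenset({'good', 'bad'}): 0.4},
           {frozenset({'good'}): 0.5, frozenset({'bad'}): 0.2, frozenset({'good', 'bad'}): 0.3}],
    'P2': [{frozenset({'bad'}): 0.7, frozenset({'good', 'bad'}): 0.3},
           {frozenset({'good'}): 0.4, frozenset({'good', 'bad'}): 0.6}],
}
print(select_local_target(['P1', 'P2'], bbas))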

7. The Simulation of the Local Target Point in Belief Space

In a real-time path planning process, the environment information must be detected at each time step, so the simulation of the local target point in belief space should reflect this characteristic. In this paper, the simulation maps are prepared beforehand as .bmp pictures, and the belief-space local target point algorithm is implemented in software; during the simulation, the display shows the start point and the target point, the obstacles on the map, the robot's particle and self-safe area, the detection rays, the robot's position, and the path.

Two different types of simulation are presented in this paper to demonstrate the feasibility of the double-safe-edges space and of selecting the local target point in belief space. Simulation I tests the double-safe-edges space under two conditions: a special map that highlights the characteristics of the double-safe-edges space, and a dead-end area (U shape) that shows its flexibility. Simulation II tests the efficiency of selecting the local target point in belief space and shows how the belief values change during the detection-based path planning process.

Simulation I. The following two special maps show the simulation results for the double-safe-edges space. Figure 20 shows a special environment with many edges and corners. In this environment the sensor can easily detect the edge points of the obstacles, so it is easy to transform these sensor edge points into double-safe-edge points, the double-safe-edges free space can be built at every step, the local target point can be found in real time, and the robot moves to that position. The simulation shows that the double-safe-edges space method finds the target point successfully while keeping the path smooth. Figure 21 shows a special environment called the dead-end area (U shape). When a robot enters this environment, no obstacle edges can be found, so it is hard to find the right local target point to escape from the obstacle. In the double-safe-edges space this situation is the Z-DSE type: the robot moves along one side of the obstacle until it finds new edges of the obstacle. The simulation shows that the method escapes the dead-end area and reaches the goal successfully.

These simulations show that the double-safe-edges space is a feasible way to describe the environment detected in real time.

Simulation II. The results of this simulation are shown in Figures 22 and 23. Different detection ranges lead to different decision beliefs in the belief space. Figure 22 shows situation A, in which the maximum detection distance is shorter than the length of the right-angle line, so the detection ability of the sensor is low; the simulation shows that the robot reaches the target point after 115 steps. Figure 23 shows the fused belief of the correlation factors for each of these 115 local target points at each step; the belief stays above the 0.7 line, which means that the local target point selected at each step has a good belief degree.

Figure 24 shows situation B, in which the maximum detection distance is longer than the length of the right-angle line, so the detection ability of the sensor is high; the simulation shows that the robot reaches the target point after 126 steps. Figure 25 shows the fused belief of the correlation factors for each of these 126 local target points at each step; the belief is again higher than the 0.7 line, which means that the local target point selected at each step has a good belief degree. These simulations show that the belief space algorithm works well.

8. Conclusions

As can be seen from the literature, there are many methods for robot path planning, but most of them do not work well in a complex real-time environment. In this paper, we address two problems in real-time detecting path planning: how to express the environment and how to optimize the path in local path planning. The double-safe-edges space is presented to express the environment, and the simulations confirm the feasibility of this approach. The belief space then successfully fuses the planning factors and the detection uncertainty in real-time detecting path planning, and the belief space simulations run well. These results should effectively support further research on real-time path planning. Certainly, many difficult problems remain, such as the details of the system structure and how to control the robot accurately. These considerations will be addressed in our future work.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 51379049 and 51109045) and the Fundamental Research Funds for the Central Universities of China (HEUCFX41302).