Abstract

A new behavior-based fuzzy control method for mobile robot navigation is presented. It is based on a behavioral architecture that can deal with uncertainties in unknown environments and can accommodate different behaviors. Each basic behavior is controlled by its own fuzzy logic controller. The proposed approach is able to drive a robot to a target while avoiding obstacles in the environment. Simulations and experiments are performed to verify the correctness and feasibility of the proposed method.

1. Introduction

One way to accomplish robot navigation is to use behavior arbitration and behavior control [1, 2]. Since the behavior control architecture was proposed in 1986, it has frequently been adopted to solve the robot navigation problem. Behavior control is a special form of decentralized switching control in which each behavior is autonomous and can drive the robot on its own, without depending on other behaviors. Under the standard behavior paradigm, each behavior issues its own specific control command. The behavior control architecture can handle the navigation problem in an online manner and does not require an environment model; thus, it has been widely used for mobile robot navigation [3-5]. Fuzzy logic control (FLC) [6] is the most important method for behavior-based robot control and has been investigated for mobile robot navigation and obstacle avoidance by many researchers; see [7-14].

In [9], Selekwa et al. presented the design of a preference-based fuzzy behavior system for navigating robotic vehicles using a multivalued logic framework; the method allows the robot to smoothly and effectively navigate through cluttered environments. In [11], a new fuzzy logic algorithm is developed for mobile robot navigation in local environments: the robot perceives its environment through sensors, while the fuzzy logic algorithm performs the main tasks of obstacle avoidance and target seeking. In [13], Wang et al. proposed a behavior-based hierarchical fuzzy control method for mobile robot navigation in dynamic environments.

Fuzzy logic controllers can drive a mobile robot to perform specific motions with good robustness. However, a fuzzy mobile robot controller should be self-adaptive, since it must deal with very different control situations such as following a wall or a corridor, or avoiding an obstacle. Rules must be designed so that they can handle these different situations. One way of solving this problem is to create a fundamental or basic controller that is incrementally updated and optimized by adequately tuning both labels and rules. Another way is to decompose the complex behavior into several subbehaviors, or motions, which are controlled by separate schemas. These control schemas can then be merged by combining the corresponding actions. Within each control schema, it is possible to define the most appropriate semantics for the linguistic variables manipulated in the rules, and thus each behavior can be tuned independently to be more effective in its own context. Complex behaviors can then be obtained by composing simpler behaviors.

This paper addresses the mobile robot navigation problem by combining the ideas of behavior-based control and fuzzy logic control. The rest of this paper is organized as follows. Section 2 presents the framework of behavior-based robot navigation. Section 3 gives the robot kinematic model. Section 4 presents the detailed structure of the proposed control method, which contains three elementary behaviors. Simulation results that demonstrate the performance of the proposed approach are given in Section 5. Experiments are presented in Section 6, and concluding remarks in Section 7 close this paper.

2. Framework of Behavior-Based Robot Navigation

This section proposes a framework for a behavior-based navigation strategy for a mobile robot in an unknown environment. The general structure of a fuzzy behavior system consists of several independent fuzzy behaviors and a command fusion component, see Figure 1. The framework includes four components: preprocessing, goal determination, behavior arbitration, and command fusion. In each robot control cycle, the robot reasoning system provides a set of motor control commands via an inference process F. This inference process can be considered as a relationship between the input X and the output Y. The input X is represented by a multidimensional vector corresponding to a particular set of input data, for example, the distance to an obstacle or the direction to the goal. Similarly, the output Y corresponds to the motor speed, steering angle, and so on. Thus, the relation between X and Y can be expressed by

Y = F(X).  (1)
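As a minimal sketch of this mapping (the type and field names below are hypothetical, not taken from the paper), the reasoning step of one control cycle can be viewed as a single function from preprocessed sensor data X to motor commands Y; the concrete F is built from the fuzzy behaviors described in the following sections.

```python
from dataclasses import dataclass

@dataclass
class PerceptionInput:          # X: a preprocessed set of input data
    dist_to_goal: float         # EP, metres
    angle_to_goal: float        # EA, radians
    obstacle_dists: tuple       # (left, front, right) obstacle distances, metres

@dataclass
class MotorCommand:             # Y: motor-level output
    omega_right: float          # right-wheel angular velocity, rad/s
    omega_left: float           # left-wheel angular velocity, rad/s

def reasoning_system(x: PerceptionInput) -> MotorCommand:
    """One control cycle, Y = F(X); realized by the behaviors of Sections 3 and 4."""
    ...
```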

2.1. Preprocessing

Input data from the robot sensors undergoes simple preprocessing to reduce noise; specifically, the sonar and infrared sensor measurements are preprocessed. The preprocessing module also reduces computational complexity when the raw input is large: the dimension of the input is reduced by introducing a limited number of intermediate variables. These variables classify the different perceptual situations which are relevant to the robot's current behavior and status. Examples of such intermediate variables are the front-obstacle distance, the right-obstacle distance, and the left-obstacle distance. These variables are also used later in the behavior design.

2.2. Goal Determination

In this investigation, we classify goal determination into two types: a determined goal and an undetermined goal. When the goal is determined, its exact coordinates are given, and the robot must move to the given target while autonomously avoiding the obstacles blocking its path. When the goal is undetermined, the robot does not have the exact coordinates of the goal.

2.3. Behaviors and Fuzzy Logic Controllers

In our case there are, in general, two types of robot behaviors: goal seeking (GS) and obstacle avoidance (OA). To realize the GS behavior, a controller is designed to generate motor commands that bring the robot to the goal as soon as possible. The OA behavior is a sensor-based behavior, which implements a control strategy based on external sensing.

As designed, each behavior is controlled by a specific fuzzy controller. In these controllers, reasoning is contained in rules operating on linguistic inputs and outputs, such as

IF x is A THEN y is B,  (2)

where x is the input linguistic variable taking linguistic value A, and each linguistic value A is defined by a membership function μ_A(x). Similarly, y is the output linguistic variable taking linguistic value B, and each output linguistic value B is defined by a membership function μ_B(y).
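As a concrete illustration (the breakpoints below are assumptions for illustration, not the paper's actual parameters), a linguistic value such as NEAR can be represented by a triangular membership function:

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Example: degree to which a 0.8 m obstacle distance is "NEAR"
# (breakpoints 0.5 m, 1.0 m, 1.5 m are illustrative only).
mu_near = triangular(0.8, 0.5, 1.0, 1.5)   # -> 0.6
```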

Given two linguistic values A and B defined on the same universe of discourse, the AND and OR operations are defined by (3) and (4), respectively:

μ_{A AND B}(x) = min(μ_A(x), μ_B(x)),  (3)

μ_{A OR B}(x) = max(μ_A(x), μ_B(x)).  (4)
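In code, these operations reduce to the minimum and maximum of membership degrees, as in the following sketch (the numeric degrees in the example are illustrative):

```python
def fuzzy_and(mu_a: float, mu_b: float) -> float:
    """AND of two membership degrees, as in (3): the minimum."""
    return min(mu_a, mu_b)

def fuzzy_or(mu_a: float, mu_b: float) -> float:
    """OR of two membership degrees, as in (4): the maximum."""
    return max(mu_a, mu_b)

# Example: "distance is NEAR AND angle error is PS"
firing_strength = fuzzy_and(0.6, 0.4)   # -> 0.4
```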

Figure 2 shows the principle of a basic fuzzy logic controller. In this investigation, we use a Takagi-Sugeno-Kang model as the fuzzy inference engine and the Centroid method for defuzzification.

The fuzzy inference engine is defined as in Box 1.
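The sketch below illustrates the inference step under simplifying assumptions: zero-order (singleton-consequent) rules and a weighted-average defuzzification, reusing the triangular and fuzzy_and helpers above. The rule parameters are hypothetical and only indicate the mechanics, not the actual rule base of Box 1.

```python
from typing import Callable, List, Tuple

# A rule is (antecedent, consequent): the antecedent maps crisp inputs (EP, EA)
# to a firing strength in [0, 1]; the consequent is a crisp singleton output.
Rule = Tuple[Callable[[float, float], float], float]

def infer(rules: List[Rule], ep: float, ea: float) -> float:
    """Fire all rules and defuzzify by the weighted average of the consequents."""
    strengths = [antecedent(ep, ea) for antecedent, _ in rules]
    total = sum(strengths)
    if total == 0.0:
        return 0.0                  # no rule fires: output zero (an assumption)
    return sum(s * c for s, (_, c) in zip(strengths, rules)) / total

# Two hypothetical rules for the right-wheel velocity:
rules = [
    # IF EP is ZE AND EA is ZE THEN omega_r = 0 (stop at the goal)
    (lambda ep, ea: fuzzy_and(triangular(ep, -0.1, 0.0, 1.0),
                              triangular(ea, -0.5, 0.0, 0.5)), 0.0),
    # IF EP is M THEN omega_r = 3 rad/s (drive toward a distant goal)
    (lambda ep, ea: triangular(ep, 0.5, 2.0, 4.0), 3.0),
]
omega_r = infer(rules, ep=1.0, ea=0.1)   # -> 3.0: only the second rule fires
```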

2.4. Behavior Arbitration (Coordination)

Behavior arbitration involves two conceptually different problems. The first is deciding when the OA and GS behaviors are active; in our case, both are activated during the whole motion process. The second is combining the results from the different behaviors into one command sent to the robot's actuators. These problems are illustrated in Figure 3.

One strategy for behavior arbitration is context-dependent blending (CDB), in which fuzzy logic is applied so that a decision between behaviors can be made in the prevailing situation. Our behavior arbitration strategy is similar to the CDB method: it uses fuzzy context rules to express the arbitration strategy. When an obstacle is close, both the OA and GS behaviors are activated and each behavior is assigned a weighting factor. These factors are adjusted dynamically according to the fuzzy weight rules and determine the degree of influence of each behavior on the final motion command. The weight rules continuously update the behavior weighting factors during robot motion.
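A hedged sketch of this idea follows. The paper derives the weights from fuzzy rules over the three sector obstacle distances; here, as a simplification, the OA weight grows linearly as the nearest obstacle gets closer and the GS weight is taken as its complement. The distance breakpoints are assumptions.

```python
def behavior_weights(nearest_obstacle: float,
                     d_near: float = 0.5, d_far: float = 2.0) -> tuple:
    """Return (w_oa, w_gs): OA weight 1 when very close, 0 when far away."""
    if nearest_obstacle <= d_near:
        w_oa = 1.0
    elif nearest_obstacle >= d_far:
        w_oa = 0.0
    else:
        w_oa = (d_far - nearest_obstacle) / (d_far - d_near)
    return w_oa, 1.0 - w_oa

w_oa, w_gs = behavior_weights(1.2)   # -> (0.533..., 0.466...)
```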

3. Kinematic Model of Mobile Robot

In this investigation, a differentially driven mobile robot is used, see its kinematic illustration in Figure 4.

The robot has two drive wheels mounted on the same axis. The kinematic equations of this two-wheeled mobile robot are

ẋ = v cos θ,  ẏ = v sin θ,  θ̇ = ω,  (5)

v = r(ω_R + ω_L)/2,  ω = r(ω_R − ω_L)/L,  (6)

where x and y are the coordinates of the mass center of the mobile robot, θ is the angle that represents the current orientation of the robot, v and ω are the linear and angular velocities of the robot, ω_R and ω_L are the angular velocities of the right and left wheels, r is the wheel radius, and L is the distance between the two wheel centers.

Combining (5) and (6) yields

ẋ = (r/2)(ω_R + ω_L) cos θ,  ẏ = (r/2)(ω_R + ω_L) sin θ,  θ̇ = (r/L)(ω_R − ω_L).  (7)

Equation (7) is the kinematic model of the robot used in both the simulations and the experiments. The right wheel angular velocity ω_R and the left wheel angular velocity ω_L are used as motion commands to the motors for realizing the different behaviors.
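A minimal sketch of the kinematic model (7) with a simple Euler integration step, as one might use in simulation; the wheel radius corresponds to the 217 mm wheel diameter reported for the experimental robot, while the wheel-base value and step size are placeholders.

```python
import math

def kinematics_step(x: float, y: float, theta: float,
                    omega_r: float, omega_l: float,
                    r: float = 0.1085, L: float = 0.4,
                    dt: float = 0.02) -> tuple:
    """Advance the differential-drive pose one step from wheel speeds (rad/s)."""
    v = r * (omega_r + omega_l) / 2.0      # linear velocity, from (6)
    w = r * (omega_r - omega_l) / L        # angular velocity, from (6)
    x += v * math.cos(theta) * dt          # pose update, from (5)
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta
```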

4. Behavior-Based Fuzzy Control for Mobile Robot Navigation

We decompose the task of robot navigation into three elementary behaviors: goal seeking (GS), obstacle avoidance (OA), and behavior fusion (BF).

4.1. Global Goal-Seeking Behavior

The GS behavior discussed here is a global behavior which does not rely on externally sensed data but seeks the global, exact goal. The inputs of the fuzzy logic controller are the distance from the robot to the goal (EP) and the angle between the robot orientation and the goal direction (EA); both are shown in Figure 4. These two inputs are given by

EP = √((x_g − x)² + (y_g − y)²),  EA = atan2(y_g − y, x_g − x) − θ,

where (x, y, θ) is the robot position and orientation and (x_g, y_g, θ_g) is the position and orientation of the goal.
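The two inputs can be computed as in the sketch below (a standard formulation; the angle wrapping to (−π, π] is an implementation detail not stated in the paper):

```python
import math

def goal_errors(x: float, y: float, theta: float,
                x_g: float, y_g: float) -> tuple:
    """Return (EP, EA): distance to the goal and heading error toward it."""
    ep = math.hypot(x_g - x, y_g - y)
    ea = math.atan2(y_g - y, x_g - x) - theta
    ea = math.atan2(math.sin(ea), math.cos(ea))   # wrap to (-pi, pi]
    return ep, ea

ep, ea = goal_errors(0.0, 0.0, 0.0, 3.0, 4.0)     # -> (5.0, 0.927...)
```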

The position deviation EP is represented by a five-member linguistic fuzzy set {ZE, S, M, B, VB}, whose entries denote a distance of zero, small, medium, big, and very big, respectively; the corresponding membership functions are shown in Figure 5(a). Similarly, the angular error EA is represented by {NB, NM, NS, ZE, PS, PM, PB}, with members denoting negative big, negative medium, negative small, zero, positive small, positive medium, and positive big, respectively; see their membership functions in Figure 5(b). Positive and negative values imply that the robot turns left and right, respectively.

The motion control variables of the mobile robot are the angular velocities of the right and left wheels. These two velocities are likewise represented by a seven-member linguistic fuzzy set {NB, NM, NS, ZE, PS, PM, PB}, with membership functions shown in Figure 5(c).

The rule base of the GS behavior is summarized in Table 1. For instance, the (1, 1) entry in Table 1 can be written as an IF-THEN rule of the form of (2), whose antecedent combines the first linguistic values of EP and EA and whose consequent assigns the wheel velocities given by that entry.

This behavior-based fuzzy controller can be used alone in an environment without obstacles. Usually, however, the environment does contain obstacles.

4.2. The Obstacle Avoidance Behavior

Obstacle avoidance is a sensor-based behavior which implements a control strategy based on external sensing. We reduce the dimension of the inputs by grouping the robot's sonar readings into three sectors: left, front, and right. Our robot has 24 ultrasonic sonars, which produce a set of obstacle distances through

d_left = min_{i∈S_L} d_i,  d_front = min_{i∈S_F} d_i,  d_right = min_{i∈S_R} d_i,  (10)

where d_i is the reading of the i-th sonar and S_L, S_F, and S_R are the index sets of the sonars facing left, forward, and right; this grouping follows from the layout of the ultrasonic sensors. The obstacle distance of each sector is represented by the three-member fuzzy set {VERYNEAR, NEAR, FAR}, with membership functions shown in Figure 6(a), while the behavior weight membership functions are shown in Figure 6(b).
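A sketch of the grouping in (10); the even split of the 24 sonars into three contiguous index ranges is an assumption, since the actual sector assignment depends on the mounting angles of the sensors on the robot.

```python
def sector_distances(sonar: list) -> tuple:
    """Group 24 sonar readings (metres) into (d_left, d_front, d_right)."""
    assert len(sonar) == 24
    d_left = min(sonar[0:8])      # sonars facing the left side (assumed indices)
    d_front = min(sonar[8:16])    # sonars facing forward (assumed indices)
    d_right = min(sonar[16:24])   # sonars facing the right side (assumed indices)
    return d_left, d_front, d_right
```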

The velocity variables ω_R and ω_L (here for obstacle avoidance) are represented by the fuzzy set {NF, NM, PM, PF}, meaning negative fast, negative medium, positive medium, and positive fast. The velocity rules of the left and right wheels for the OA behavior are summarized in Figures 7 and 8, respectively. The rules exhibit a characteristic behavior: if the obstacle distance in any sector is very near, the robot should turn away to find a safer direction.

For instance, the two (3, 3) elements of the top layers in Figures 7 and 8 give the corresponding left-wheel and right-wheel velocity rules, respectively.

When the three sectors of the robot body corresponding to (10) report similar obstacle distances, as in the (1, 1) element of the two top layers in Figures 7 and 8, the robot has to escape from its current predicament: ω_L is NF and ω_R is PF, which makes the robot take a large left turn while decreasing its speed, since by (6) the turning rate grows with ω_R − ω_L while the forward speed depends on ω_R + ω_L.

4.3. Behavior Fusion

Behavior fusion is based on the weight assigned to each behavior. The weight of the OA behavior is represented by the three-member linguistic fuzzy set {SMALL, MEDIUM, LARGE}, with membership functions shown in Figure 6(b) and rules in Figure 9. First, a behavior arbitration module calculates the defuzzified weight factors of all behaviors; command fusion is then carried out using these weight factors via

ω_R = Σ_i w_i ω_{R,i},  ω_L = Σ_i w_i ω_{L,i},

where ω_R and ω_L are the angular velocities of the right and left wheels sent as motion commands, while ω_{R,i} and ω_{L,i} are the right and left wheel angular velocities suggested by each specific behavior i.

Here, w_i is the defuzzified weight factor of behavior i. The implementation of the behavior fusion is depicted in Figure 10. As shown in the figure, the motion control variables (the angular velocities of the right and left wheels) are inferred by the GS and OA behaviors and weighted by w_GS and w_OA, respectively. The results are given by

ω_R = w_GS ω_{R,GS} + w_OA ω_{R,OA},  ω_L = w_GS ω_{L,GS} + w_OA ω_{L,OA}.
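A minimal sketch of this fusion step for the two behaviors, using the defuzzified weights (variable names are illustrative):

```python
def fuse_commands(w_gs: float, w_oa: float,
                  gs_cmd: tuple, oa_cmd: tuple) -> tuple:
    """Weighted fusion of the (omega_r, omega_l) pairs suggested by GS and OA."""
    omega_r = w_gs * gs_cmd[0] + w_oa * oa_cmd[0]
    omega_l = w_gs * gs_cmd[1] + w_oa * oa_cmd[1]
    return omega_r, omega_l

# Example: GS suggests driving straight, OA suggests a left turn; OA dominates.
omega_r, omega_l = fuse_commands(0.3, 0.7, gs_cmd=(2.0, 2.0), oa_cmd=(2.5, -1.0))
# -> (2.35, -0.1): the robot slows down and turns left
```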

For instance, the rule of the (3, 3) element of the top layer in Figure 9 can be written as an IF-THEN rule whose antecedent is the corresponding combination of the three sector obstacle distances and whose consequent is the OA weight given by that entry.

5. Simulation

To verify the proposed method, simulations were carried out in MATLAB. We designed a separate fuzzy logic controller for each behavior via a fuzzy inference system (FIS).

5.1. Simulation in the Environment without Obstacles

In this case, the robot is supposed to move from the start point to the goal . Note that the initial orientation of the robot is and there are no obstacles in the environment.

As illustrated in Figure 11, the robot reaches the goal; the trajectory is indicated by the chain of red circles. The robot's trajectories in the x and y directions are shown separately in Figure 12.

The weight of each behavior represents the extent of its influence on the final motion commands. In this simulation, the proposed control system uses only the GS behavior: because there are no obstacles, the GS behavior weight is 1 and the OA weight is 0.

5.2. Simulation in the Environment with Obstacles

The robot now moves in an environment with obstacles. The initial position is also and the final position is .

When the robot is close to obstacles, it must decrease its speed and turn away for safety; accordingly, the weight of the OA behavior is high when the robot is close to obstacles and low when the obstacles are far away. The robot is set at a particular start point, and the goal is defined in Cartesian coordinates with respect to the start position. The simulation results, shown in Figure 13, demonstrate that the fuzzy behavior controller performs well. From Figure 14, we can also see that the robot reaches the goal after about 3.2 seconds.

5.3. Simulation in a Cluttered Environment

Figure 15 shows the results in a more complex situation (a cluttered environment). It illustrates the method's ability to navigate the robot through very small gaps and even to turn it away from the goal when this is necessary to avoid obstacles. Here, the initial position is set at and the final position is set at .

The robot reaches the goal after about 5.5 seconds, as shown in Figure 16. Since there are many obstacles, the robot takes more time to reach the goal than in the simulations of Sections 5.1 and 5.2.

6. Experiment

The real robot used is a mobile robot equipped with a ring of 24 sonars; see Figure 17. This platform is configured with two drive wheels and two swivel casters for balance. Each wheel is driven independently by a motor with a 10 : 1 gear ratio, which enables the robot to drive at a maximum speed of 3 m/s and climb a 15% grade [15]. The wheel diameter is 217 mm and the mobile robot base width is  mm.

6.1. Experiment for Robot Roaming

This experiment lets the robot wander within a small area containing static obstacles, for example, desks and walls, and dynamic obstacles, for example, moving humans. In this setting the weight of the OA behavior is very high and that of the GS behavior is very low. The experiment is stopped after a given time or wandering distance. Figure 18 shows the robot path in the laboratory environment; the trajectory is indicated by the chain of blue circles, with a circle drawn every 0.02 seconds. A concentration of circles indicates that the robot is moving more slowly at that moment. Figures 19 and 20 show the variations of the robot speed and turn angle during the wandering.

6.2. Experiment of Robot Avoiding Obstacles and Reaching a Goal

The robot's task is now to use the proposed navigation method to avoid obstacles and reach a goal. For this experiment, the robot is set at the start point and the goal is defined in Cartesian coordinates with the position of . The obstacle configuration was mapped, and the localization data of the robot were recorded and plotted in Figure 21. We can see clearly that the robot reaches the goal while avoiding the obstacles.

Figures 22 and 23 show the variations of the robot's speed and turn angle, respectively. A positive angle indicates a left turn, while a negative angle indicates a right turn. The robot reaches the goal in almost 20 seconds.

The results presented in the previous sections show the performance of the proposed behavior-based control (navigation) method. A mobile robot usually has uncertain and incomplete information about the environment, so fuzzy rules provide an attractive means for mapping sensor data to appropriate control actions.

The main difference between our approach and existing ones is that, in existing methods, each behavior has its own reasoning and its own motion commands, whereas in our approach the final motor command is generated by fusing the outputs of the different behavior-based fuzzy logic controllers into a uniform representation.

7. Conclusions

This paper presented a new behavior-based fuzzy control method for mobile robot navigation. The method takes the angular velocities of the driving wheels as the outputs of the different behaviors, and fuzzy logic is used to implement the specific behaviors. To reduce the number of input variables, we introduced a limited number of intermediate variables, which guarantee the consistency and completeness of the fuzzy rule bases.

To verify the correctness and effectiveness of the proposed approach, simulations and experiments were performed; a Voyager II robot equipped with a ring of ultrasonic sensors was used in the real experiments. The promising results demonstrate that our method is feasible and reasonable for navigating a mobile robot. The method could be extended to other behavioral systems by introducing more flexible behavior arbitration strategies.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 61075113, the Excellent Youth Foundation of Heilongjiang Province of China under Grant no. JC2012012, and the Fundamental Research Funds for the Central Universities under Grant no. HEUCFZ1209.