Abstract

The stabilization problem for stochastic nonholonomic mobile robots with uncertain parameters is addressed in this paper. Nonholonomic mobile robot models with unknown kinematic parameters are extended to the stochastic case. Based on the backstepping technique, adaptive state-feedback stabilizing controllers are designed for nonholonomic mobile robots with unknown kinematic parameters whose linear and angular velocities are simultaneously subject to stochastic disturbances. A switching control strategy for the original system is presented. The proposed controllers guarantee that the states of the closed-loop system are asymptotically stabilized at the zero equilibrium point in probability.

1. Introduction

In the past decades, the control of nonholonomic systems has been widely pursued. By Brockett's result [1], a nonholonomic system cannot be stabilized at a single equilibrium point by any static smooth pure state-feedback controller. To overcome this obstruction, several approaches have been proposed: discontinuous feedback control [2–4], smooth time-varying feedback control [5], and the LMI method [6]. The control of nonholonomic mobile robots plays an important role in this area because such robots serve as a benchmark for nonholonomic systems, and much attention has been devoted to them. Nonholonomic mobile robots were classified into four types characterized by the generic structures of their model equations [7]. Based on the backstepping technique, both tracking problems [8] and stabilization problems [9, 10] for nonholonomic mobile robots have been studied. Hespanha et al. introduced a mobile robot model with parametric uncertainties [11], which was further discussed in [12, 13]. However, all the above articles treated nonholonomic systems in the deterministic case, without considering stochastic disturbances.

In recent years, stochastic nonlinear systems have received much attention [14, 15], especially since backstepping designs were first introduced for stochastic control [16, 17]. For stochastic nonholonomic systems, only a few results are available. Almost globally asymptotically stabilizing adaptive controllers for stochastic nonholonomic chained-form systems were obtained by using discontinuous control [18]. The adaptive stabilization problem of stochastic nonholonomic systems with nonlinear drifts was considered in [19–21]. By using the state-scaling method, backstepping controllers were proposed to achieve exponential stabilization of nonholonomic mobile robots with stochastic disturbances [22, 23]. However, these two papers did not consider unknown parameters. To our knowledge, the problem of adaptive state-feedback stabilization for nonholonomic mobile robots with unknown kinematic parameters, whose linear and angular velocities are simultaneously subject to stochastic disturbances, has not been reported. A natural problem is therefore how to extend the models in [11–13] to the stochastic case and design adaptive state-feedback stabilizing controllers for stochastic nonholonomic mobile robots with uncertain parameters.

The purpose of this paper is to design adaptive state-feedback stabilizing controllers for stochastic nonholonomic mobile robots with unknown parameters. The main contributions of this paper are highlighted as follows.
(i) We extend the models of nonholonomic mobile robots with unknown parameters in [11–13] to the stochastic case. Stabilizing controllers are designed for stochastic nonholonomic mobile robots with unknown parameters by the adaptive state-feedback backstepping technique.
(ii) A switching control strategy for the original system is presented. It guarantees that the states of the closed-loop system are asymptotically stabilized at the zero equilibrium point in probability.

The paper is organized as follows. Section 2 presents the mathematical preliminaries and the problem formulation. In Section 3, the adaptive state-feedback backstepping controller is designed. In Section 4, a switching control strategy for the original system is discussed. Finally, a simulation example is given in Section 5 to show the effectiveness of the controller.

2. Preliminaries and Problem Formulation

2.1. Preliminaries

Consider the following stochastic nonlinear system:
$$ dx = f(x)\,dt + g(x)\,d\omega, \tag{1} $$
where $x \in \mathbb{R}^n$ is the state, the Borel measurable functions $f(x)$ and $g(x)$ are locally Lipschitz in $x$, and $\omega$ is an $r$-dimensional independent standard Wiener process defined on the complete probability space $(\Omega, \mathcal{F}, P)$.

The following definitions and lemmas will be used in the paper.

Definition 1 (see [16]). For any given function $V(x) \in C^{2}$ associated with the stochastic system (1), the differential operator $\mathcal{L}$ is defined as follows:
$$ \mathcal{L}V(x) = \frac{\partial V}{\partial x} f(x) + \frac{1}{2}\operatorname{Tr}\Big\{ g^{T}(x)\,\frac{\partial^{2} V}{\partial x^{2}}\,g(x) \Big\}. $$

Definition 2 (see [24]). The equilibrium $x = 0$ of system (1) is
(i) globally stable in probability if for any $\varepsilon > 0$ there exists a class $\mathcal{K}$ function $\gamma(\cdot)$ such that $P\{|x(t)| < \gamma(|x_{0}|)\} \ge 1 - \varepsilon$ for all $t \ge 0$ and all $x_{0} \in \mathbb{R}^{n}\setminus\{0\}$;
(ii) globally asymptotically stable in probability if it is globally stable in probability and $P\{\lim_{t\to\infty}|x(t)| = 0\} = 1$ for all $x_{0} \in \mathbb{R}^{n}$.

Definition 3 (see [25]). A stochastic process $x(t)$ is said to be bounded in probability if the random variable $|x(t)|$ is bounded in probability uniformly in $t$; that is,
$$ \lim_{c \to \infty}\,\sup_{t \ge 0} P\{\,|x(t)| > c\,\} = 0. $$

Lemma 4 (see [24]). Consider the stochastic system (1). If there exist a function $V(x) \in C^{2}$, class $\mathcal{K}_{\infty}$ functions $\alpha_{1}$ and $\alpha_{2}$, constants $c_{1} > 0$ and $c_{2} \ge 0$, and a nonnegative function $W(x)$ such that
$$ \alpha_{1}(|x|) \le V(x) \le \alpha_{2}(|x|), \qquad \mathcal{L}V \le -c_{1} W(x) + c_{2}, $$
then
(i) for (1), there exists an almost surely unique solution on $[0,\infty)$ for each $x_{0} \in \mathbb{R}^{n}$;
(ii) when $c_{2} = 0$, $f(0) = 0$, $g(0) = 0$, and $W(x)$ is continuous, the equilibrium $x = 0$ is globally stable in probability and $P\{\lim_{t\to\infty} W(x(t)) = 0\} = 1$.

Lemma 5 (see [26]). Let $x$ and $y$ be real variables. Then, for any positive integers $m$, $n$ and any real number $\varepsilon > 0$, the following inequality holds:
$$ |x|^{m}|y|^{n} \le \frac{m}{m+n}\,\varepsilon\,|x|^{m+n} + \frac{n}{m+n}\,\varepsilon^{-m/n}\,|y|^{m+n}. $$
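As a simple illustration of Lemma 5, taking $m = n = 1$ gives
$$ |xy| \le \frac{\varepsilon}{2}\,x^{2} + \frac{1}{2\varepsilon}\,y^{2}, $$
a form commonly used in backstepping estimates.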

2.2. Problem Formulation

Hespanha et al. introduced a mobile robot model with parametric uncertainties [11], which was further discussed in [12, 13]; this is system (9), in which the two control inputs denote the forward velocity and the angular velocity, respectively.
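For orientation, the nominal (disturbance-free, exactly known) unicycle kinematics underlying this class of models can be sketched as
$$ \dot{x}_c = v\cos\theta, \qquad \dot{y}_c = v\sin\theta, \qquad \dot{\theta} = \omega, $$
where $(x_c, y_c)$ denotes the position of the robot and $\theta$ its heading angle; in [11–13] the right-hand sides additionally carry unknown kinematic parameters (arising, e.g., from uncertainty in the wheel radius and axle length). This is a background sketch rather than the exact form of (9).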

Here we assume that the forward velocity and the angular velocity are subject to stochastic disturbances. Following methods similar to those in [27, pages 1-2], the forward velocity and the angular velocity with stochastic disturbances can be expressed as in (10), where the noise term is the formal derivative of a Brownian motion.
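One representative way to write such a decomposition, with $u_1$, $u_2$ denoting the nominal control parts and $g_1$, $g_2$ assumed noise intensities (these symbols are illustrative and need not coincide with those of (10)), is
$$ v\,dt = u_1\,dt + g_1\,dB, \qquad \omega\,dt = u_2\,dt + g_2\,dB, $$
so that each applied velocity consists of a nominal control part plus a stochastic part, as described in Remark 6 below.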

Remark 6. The second equality of (10) is the same as that in Remark 2 of [19]. Moreover, (10) means that the forward velocity can be divided into two parts, the second part being a stochastic disturbance, and the same holds for the angular velocity.

Substituting (10) into (9), system (9) can be transformed into system (11), where one unknown parameter takes values in a known interval and another unknown parameter is positive.

For system (11), we introduce the state and input transformation (12); it is then easy to see that the transformed system takes the form (13a) and (13b).
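In the deterministic, known-parameter case, a classical transformation of this type brings the unicycle kinematics into chained form; the transformation (12) plays the analogous role here while also accounting for the unknown parameters. As an illustrative reference (the symbols below are not those of (12)), one such transformation is
$$ x_0 = \theta, \qquad x_1 = x_c\cos\theta + y_c\sin\theta, \qquad x_2 = x_c\sin\theta - y_c\cos\theta, $$
$$ u_0 = \omega, \qquad u_1 = v - x_2 u_0, $$
which yields the chained form $\dot{x}_0 = u_0$, $\dot{x}_1 = u_1$, $\dot{x}_2 = x_1 u_0$.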

Remark 7. The main difference between this paper and [22] is the presence of unknown parameters, which makes the controller design for systems (13a) and (13b) more difficult.

Remark 8. For systems (13a) and (13b), a variable of subsystem (13a) enters a term in the first equation of (13b); this is different from the traditional stochastic backstepping technique in [16, 17, 24].

3. Adaptive Controller Design

In this section, we design state-feedback controllers such that all the signals in the closed-loop system are regulated to the origin in probability. The following assumptions are needed.

Assumption 9. For the smooth function , there exists a known positive constant , such that

Assumption 10. For the smooth function and any positive constant, there exists a known nonnegative constant such that

Remark 11. For the adaptive controller design in the following, if we let , this assumption will change to , where is defined in (25) and is the same as that in (28) in Section 3.2 below.

Firstly, we consider the stabilization problem for systems (13a) and (13b) under the condition that the initial value of the state of subsystem (13a) is nonzero. The case of a zero initial value will be discussed in Section 4.

3.1. The First State Stabilization

Let us consider the subsystem (13a) of stochastic nonholonomic nonlinear systems (13a) and (13b):

In order to guarantee that the state of subsystem (13a) converges to zero, one can take the control input as in (17), where the gain is a design parameter.

If we employ a Lyapunov function of the form (18), then from (13a), (17), (18), and Assumption 9, one can obtain the estimate (19).

Theorem 12. If Assumption 9 holds and the positive design constants and the controller (17) are chosen accordingly, then
(i) the closed-loop subsystem composed of (13a) and (17) has an almost surely unique solution on $[0,\infty)$ for any initial value;
(ii) the equilibrium of the closed-loop subsystem composed of (13a) and (17) is globally asymptotically stable in probability.

Proof. Choosing the Lyapunov function as in (18), by (19) and Lemma 4, (i) holds, and the equilibrium of the closed-loop subsystem composed of (13a) and (17) is globally stable in probability with the closed-loop state converging to zero with probability one. From Definition 2, (ii) holds.

Remark 13. From Theorem 12, the state of subsystem (13a) is bounded in probability in the sense of Definition 3.
Substituting (17) into the subsystem (13a), one gets the closed-loop equation (21).

Proposition 14. For any nonzero initial state, the solution of (21) never reaches zero, which avoids the loss of controllability of the subsystem (13b).

Proof. From Lemma 2.3 ([27, page 93]), the solution of (21) is given by the explicit expression (22). From that expression, it is easy to see that the solution will never cross the origin on the time interval under consideration.
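To illustrate why such a conclusion holds, consider the representative scalar linear stochastic differential equation, with constants $a$ and $b$ standing in for the closed-loop coefficients (an illustration rather than the exact equation (21)):
$$ dx_0 = a x_0\,dt + b x_0\,d\omega, \qquad x_0(t) = x_0(0)\exp\!\Big(\big(a - \tfrac{1}{2}b^{2}\big)t + b\,\omega(t)\Big). $$
Since the exponential factor is strictly positive, $x_0(t)$ keeps the sign of $x_0(0)$ and never reaches zero whenever $x_0(0) \neq 0$.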

In Section 3.2, the remaining states will be regulated to the origin in probability by designing the other control input.

3.2. Other States Stabilization

In order to design a smooth adaptive state-feedback controller, the discontinuous state-input scaling transformation (23) is needed.
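As an illustration of such a scaling, consider the deterministic chained form sketched after the discussion of (12). With the first control taken as $u_0 = -c_0 x_0$ and the illustrative scaling (the symbols below are not those of (23))
$$ z_1 = \frac{x_2}{x_0}, \qquad z_2 = x_1, $$
one obtains $\dot{z}_1 = c_0 z_1 - c_0 z_2$ and $\dot{z}_2 = u_1$, so the scaled dynamics no longer involve $x_0$ explicitly; the scaling, however, is defined only for $x_0 \neq 0$, which is precisely what Proposition 14 guarantees.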

Remark 15. For a nonzero initial state of (13a), it follows from Proposition 14 that transformation (23) is well defined.

Under the new coordinates, the subsystem (13b) is transformed into (24).

To invoke the backstepping method, the error variables are defined in (25).
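In a standard two-step backstepping design, the error variables take the form $z_1 = \xi_1$ and $z_2 = \xi_2 - \alpha_1(\xi_1)$, where $\xi_1$, $\xi_2$ denote the transformed states and $\alpha_1$ is the virtual control designed in Step 1; this notation is generic and may differ from the exact definitions in (25).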

Step 1. Define the first Lyapunov function candidate (26).
By (24)–(26) and Definition 1, one has (27).
The virtual control can be chosen in an appropriate form, with a positive constant to be specified later. From (27), Lemma 5, and some simple manipulations, we obtain a group of inequalities involving a design parameter. Substituting these inequalities into (27), it is easy to see that a corresponding estimate holds, in which a further design parameter appears. If we select the design parameters to satisfy (31), one has (32).

Step 2. By (24), (25), (32), and the Itô formula (Theorem 6.2, [27, page 32]), one gets (33).
To deal with the uncertain parameter, define an auxiliary unknown parameter in (34), let a new variable denote its estimate, and let the parameter estimation error be the difference between the parameter and its estimate. Define the second Lyapunov function candidate (35).
From (33), (35), and Definition 1, one can obtain (36).
By (34), (36), and Lemma 5, we have a group of inequalities.
Substituting these inequalities into (36) and adding and subtracting an appropriate term on the right-hand side of (36), we arrive at (38).
One can choose the actual control law and the adaptive laws as in (40).
Substituting (40) into (38), one gets (41).
Choosing the overall Lyapunov function appropriately, together with (19) and (41), we have (43).

Theorem 16. Suppose that Assumptions 9 and 10 hold and that the positive design constants are chosen to satisfy the design conditions, including (31). Then one has the following.
(i) The closed-loop system composed of (13a), (17), (24), and (40) has an almost surely unique solution on $[0,\infty)$ for any initial condition with a nonzero initial value of the state of (13a).
(ii) The equilibrium of the closed-loop system is globally stable in probability.
(iii) For such initial conditions, the error states are regulated to zero in probability and the parameter estimate remains bounded in probability.

Proof. From the conditions in Theorem 16, it is easy to see that the relevant constants are positive. Hence, (43) takes the same form as (3.19) in [21]. Using (43) and Lemma 4, Theorem 16 can be proved.

4. Switching Control Stability

In Section 3, the case of a nonzero initial value of the state of (13a) was discussed, and controllers for systems (13a) and (13b) were designed as in (17) and (40), respectively. Now we turn to the case of a zero initial value. When the initial state of (13a) is zero, one can choose an open-loop control to drive this state away from zero in a limited time.

In fact, when we choose a constant open-loop control, system (13a) takes the form (44). For a given constant, define a stopping time as the first time the state of (13a) reaches that level. With an analysis similar to that in Section V of [22], this stopping time is finite with probability one, which means that the state of (13a) leaves zero in a finite time. Hence, there exists a time instant at which the state of (13a) is nonzero. After that, at this time instant, we switch the control inputs to (17) and (40), respectively.

Theorem 17. If Assumptions 9 and 10 hold, one can apply the following switching control procedure to system (11):
(i) when the initial state of subsystem (13a) is nonzero, one designs the control inputs in the form (17) and (40), respectively;
(ii) when the initial state of subsystem (13a) is zero, one first chooses constant open-loop control laws; then, at the time instant when the state of (13a) has been driven away from zero, one switches the control inputs to (17) and (40), respectively.
Then, for any initial condition in the state space, the states of system (11) are asymptotically regulated to zero in probability.

Proof. Firstly, we consider the case in which the initial state of (13a) is nonzero. From Theorems 12 and 16, for the closed-loop system composed of (13a), (17), (24), and (40), the error states are regulated to zero in probability and the parameter estimate is bounded in probability. This implies that the transformed states are globally asymptotically regulated to zero in probability and are bounded in probability. By the scaling transformation (23), the states of the closed-loop system composed of (11), (17), and (40) asymptotically converge to zero in probability and are all bounded in probability. By the orthogonal transformation (12), one can then conclude that the states of the closed-loop system composed of (11), (17), and (40) are asymptotically stabilized at the origin in probability.
Secondly, when the initial state of (13a) is zero, we use the constant open-loop control in order to drive this state away from the origin, which guarantees that all the signals are bounded in probability during the initial interval. Then the switching control strategy is applied to system (11) at the switching time instant. This completes the proof.

5. A Simulation Example

Consider system (11) with given values of the unknown parameters. In the simulation, the design parameters and the initial values are chosen suitably. Figures 1, 2, and 3 give the responses of the closed-loop system consisting of (11), (17), and (40).
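Since the numerical values used above are specific to the paper, the following is only a minimal Euler-Maruyama sketch of how a closed loop of this type can be simulated. The function controller() is a hypothetical placeholder (it is not the paper's control law (17)/(40)), and the noise intensities sigma_v and sigma_w, the step size, and the initial configuration are assumed values.

import numpy as np

# Minimal Euler-Maruyama integration of a unicycle whose forward and
# angular velocities carry stochastic disturbances. The feedback law
# controller() below is a placeholder, not the paper's laws (17)/(40).

rng = np.random.default_rng(0)

dt = 1e-3                      # integration step (assumed)
T = 20.0                       # simulation horizon (assumed)
steps = int(T / dt)
sigma_v, sigma_w = 0.1, 0.1    # assumed noise intensities on v and omega

def controller(x, y, theta):
    """Hypothetical placeholder feedback; replace with (17) and (40)."""
    v = -(x * np.cos(theta) + y * np.sin(theta))
    w = -theta
    return v, w

x, y, theta = -1.5, 2.5, 1.0   # example initial configuration
traj = np.zeros((steps, 3))

for k in range(steps):
    v, w = controller(x, y, theta)
    dB1, dB2 = rng.normal(0.0, np.sqrt(dt), size=2)   # Wiener increments
    # unicycle kinematics driven by the disturbed velocities
    x += np.cos(theta) * (v * dt + sigma_v * dB1)
    y += np.sin(theta) * (v * dt + sigma_v * dB1)
    theta += w * dt + sigma_w * dB2
    traj[k] = (x, y, theta)

print("final state (x, y, theta):", traj[-1])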

From Figure 1, it is easy to see that the states of system (11) are asymptotically regulated to zero in probability in spite of the stochastic disturbances. As shown in Figure 2, the control inputs converge asymptotically to a small neighborhood of zero. Figure 3 indicates that the estimated parameter is bounded.

6. Conclusions

In this paper, we have extended nonholonomic mobile robot models with unknown parameters to the stochastic case. Based on the backstepping technique, adaptive state-feedback stabilizing controllers have been designed for stochastic nonholonomic mobile robots with unknown parameters. A switching control strategy for the original system has been given, which guarantees that the states of the closed-loop system are asymptotically stabilized at the zero equilibrium point in probability.

Some problems remain to be addressed, for example, how to design controllers for dynamic stochastic nonholonomic systems with unknown parameters.

Acknowledgments

The authors would like to express sincere gratitude to the editor and the reviewers for their hard work. This work was supported by the National Science Foundation (no. 61304004), the Natural Science Foundation of Hebei Province of China (no. A2014106035), and the Doctoral Natural Science Foundation of Shijiazhuang University.