Research Article | Open Access

# Algorithms to Solve Stochastic $H_2/H_\infty$ Control with State-Dependent Noise

**Academic Editor:** Wuquan Li

#### Abstract

This paper is concerned with algorithms that solve $H_2/H_\infty$ control problems for stochastic systems with state-dependent noise. First, algorithms for the finite and infinite horizon $H_2/H_\infty$ control of discrete-time stochastic systems are reviewed and studied. Second, two algorithms are proposed for the finite and infinite horizon $H_2/H_\infty$ control of continuous-time stochastic systems, respectively. Finally, several numerical examples are presented to show the effectiveness of the algorithms.

#### 1. Introduction

Mixed $H_2/H_\infty$ control is an important robust control method and has been extensively investigated by many researchers [1–4]. Compared with the sole $H_\infty$ control, the mixed $H_2/H_\infty$ control is more attractive in engineering practice [4], since the former is a worst-case design that tends to be conservative, while the latter minimizes the average performance under a guaranteed worst-case performance bound. Recently, stochastic $H_2/H_\infty$ control for continuous- and discrete-time systems with multiplicative noise has become a popular topic and has attracted a lot of attention [5–7]. In [5], the finite and infinite horizon $H_2/H_\infty$ control problems were discussed for continuous-time stochastic systems with state-dependent noise. The finite and infinite horizon $H_2/H_\infty$ control problems were solved for discrete-time stochastic systems with state and disturbance dependent noise by [6] and [7], respectively. Moreover, mixed $H_2/H_\infty$ control has been widely studied for stochastic systems with Markov jumps and multiplicative noise [8–11] due to their powerful modeling ability in many fields [12, 13].

Generally, the existence of an $H_2/H_\infty$ controller is equivalent to the solvability of several coupled matrix-valued equations. However, it is difficult to solve these coupled matrix-valued equations analytically. Several numerical algorithms have been developed for deterministic and stochastic $H_2/H_\infty$ control. In [1], the finite horizon $H_2/H_\infty$ controller for continuous-time deterministic systems was obtained by using the Runge-Kutta integration procedure. In [14], an exact solution to the suboptimal deterministic $H_2/H_\infty$ control problem was obtained via convex optimization. Two iterative algorithms were proposed for the finite and infinite horizon $H_2/H_\infty$ control of discrete-time stochastic systems in [6] and [7], respectively. In [15], an iterative algorithm was proposed to solve a kind of stochastic algebraic Riccati equation arising in LQ zero-sum game problems.

However, most of these algorithms concern the $H_2/H_\infty$ control of discrete-time systems. Up to now, algorithms for the stochastic $H_2/H_\infty$ control of continuous-time systems have received little research attention, because the coupled matrix-valued equations for the continuous-time $H_2/H_\infty$ control cannot be solved by recursive algorithms as in the discrete-time case. In this paper, we study algorithms that solve $H_2/H_\infty$ control problems for stochastic systems with state-dependent noise. First, the algorithms for the finite and infinite horizon $H_2/H_\infty$ control of discrete-time stochastic systems are reviewed, and an iterative algorithm is presented for the infinite horizon $H_2/H_\infty$ control of discrete-time time-varying stochastic systems. For continuous-time stochastic systems, two algorithms are proposed for the finite and infinite horizon $H_2/H_\infty$ control, respectively. Some numerical examples are presented to illustrate the developed algorithms.

For convenience, we make use of the following notation throughout this paper: $\mathbb{R}^n$: $n$-dimensional Euclidean space; $\mathcal{S}_n$: the set of all $n \times n$ symmetric matrices; $A > 0$ ($A \geq 0$): $A$ is a positive definite (positive semidefinite) symmetric matrix; $A'$: the transpose of a matrix $A$; $I$: the identity matrix; $\operatorname{tr}(A)$: the trace of a matrix $A$; $E(x)$: the mathematical expectation of $x$.

#### 2. Preliminaries

In this section, we present some preliminary results for stochastic $H_2/H_\infty$ control, including the finite horizon case for discrete-time time-varying systems, the infinite horizon case for discrete-time time-invariant systems, the finite horizon case for continuous-time time-varying systems, and the infinite horizon case for continuous-time time-invariant systems.

Consider the discrete-time time-varying stochastic system (1) with state-dependent noise, where $x(k)$, $u(k)$, $v(k)$, and $z(k)$ are, respectively, the system state, control input, disturbance signal, and output; $\{w(k)\}$ is a sequence of independent white noise processes defined on the filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_k\}, P)$ with $E[w(k)] = 0$ and $E[w(s)w(t)] = \delta_{st}$, where $\delta_{st}$ is the Kronecker function defined by $\delta_{st} = 1$ for $s = t$ and $\delta_{st} = 0$ for $s \neq t$. The initial state $x(0)$ is assumed to be deterministic for simplicity. The coefficient matrices of (1) are matrix-valued continuous functions of appropriate dimensions.

Lemma 1 (see [6]). *For a given disturbance attenuation level $\gamma > 0$, the finite horizon $H_2/H_\infty$ control for system (1) has a pair of linear state feedback solutions $(u^*, v^*)$, with the feedback gains being continuous matrix-valued functions, if and only if the coupled difference matrix-valued equations (2)–(5), subject to the given terminal conditions, admit a bounded solution on the whole horizon.*

Consider the discrete-time time-invariant stochastic system (6) with state-dependent noise.

Briefly, system (6) can be denoted by its coefficient matrices, and similar shorthand will be used in the following sections.

Lemma 2 (see [7]). *Suppose that the associated system pairs are exactly observable. For a given disturbance attenuation level $\gamma > 0$, the infinite horizon $H_2/H_\infty$ control for system (6) has a pair of linear state feedback solutions $(u^*, v^*)$ if and only if the coupled algebraic matrix-valued equations (7)–(10) have a solution.*

Consider the continuous-time time-varying stochastic system (11) with state-dependent noise, where $x(t)$, $u(t)$, $v(t)$, and $z(t)$ are, respectively, the system state, control input, disturbance signal, and output; $w(t)$ is a standard one-dimensional Wiener process defined on the filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$. The initial state $x(0)$ is assumed to be deterministic for simplicity. The coefficient matrices of (11) are matrix-valued continuous functions of suitable dimensions.

Lemma 3 (see [5]). *For a given disturbance attenuation level $\gamma > 0$, the finite horizon $H_2/H_\infty$ control for system (11) has a pair of linear state feedback solutions $(u^*, v^*)$, with the feedback gains being continuous matrix-valued functions, if and only if the coupled differential matrix-valued equations (12) and (13), subject to the given terminal conditions, have a bounded solution on the whole horizon.*

Consider the continuous-time time-invariant stochastic system (14) with state-dependent noise.

Lemma 4 (see [5]). *Suppose that the associated system pairs are exactly observable. For a given disturbance attenuation level $\gamma > 0$, the infinite horizon $H_2/H_\infty$ control for system (14) has a pair of solutions $(u^*, v^*)$ if and only if the coupled algebraic matrix-valued equations (15) and (16) have a solution.*

#### 3. Discrete-Time Case

In [6, 7], Zhang et al. provided recursive algorithms to solve the coupled matrix-valued equations in Lemmas 1 and 2, respectively. Based on those results, this section presents an algorithm to solve the infinite horizon $H_2/H_\infty$ control of discrete-time time-varying stochastic systems.

The following algorithm can be used to solve the coupled difference matrix-valued equations (2)–(5) in Lemma 1 [6].

*Algorithm 5.* Consider the following.

- (i) Start from the terminal time; the terminal values are computed according to the given final conditions.
- (ii) Solve the matrix recursions (3) and (5) to obtain the corresponding gain matrices.
- (iii) Substitute the obtained gains into the matrix recursions (2) and (4), respectively; the updated solution matrices are then available.
- (iv) Repeat the above procedures; the solutions are computed recursively, proceeding backward in time.

In Algorithm 5, an a priori definiteness condition should be checked at each step to guarantee that the recursion can proceed backward; otherwise, the algorithm has to stop. Note that the terminal values can be computed first, provided that the final conditions are known.
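
The backward sweep of Algorithm 5 can be sketched in a few lines of Python. The update maps `step_P1` and `step_P2` below are illustrative scalar placeholders standing in for the right-hand sides of (2)–(5), not the paper's actual recursions:

```python
# Backward recursion of Algorithm 5 (sketch). `step_P1`/`step_P2`
# are placeholder update maps; in the paper they would be the
# right-hand sides of the coupled recursions (2)-(5).
def solve_backward(N, step_P1, step_P2, P1_final=0.0, P2_final=0.0):
    P1 = [None] * (N + 1)
    P2 = [None] * (N + 1)
    P1[N], P2[N] = P1_final, P2_final  # step (i): terminal conditions
    for k in range(N - 1, -1, -1):     # steps (ii)-(iv): backward sweep
        P1[k] = step_P1(k, P1[k + 1], P2[k + 1])
        P2[k] = step_P2(k, P1[k + 1], P2[k + 1])
    return P1, P2

# Toy scalar recursions (placeholder dynamics, for illustration only)
f1 = lambda k, p1, p2: 0.5 * p1 + 0.1 * p2 + 1.0
f2 = lambda k, p1, p2: 0.2 * p1 + 0.4 * p2 - 1.0
P1, P2 = solve_backward(10, f1, f2)
```

In the matrix case, `P1[k]` and `P2[k]` would be symmetric matrices, and the definiteness check mentioned above would be performed before each step.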

The following algorithm can be used to solve the coupled algebraic matrix-valued equations (7)–(10) in Lemma 2 [7].

*Algorithm 6.* Consider the following.

- (i) Establish the difference equations (2)–(5) corresponding to the algebraic equations (7)–(10).
- (ii) Choose a large horizon $N$. By means of Algorithm 5, solve the difference equations (2)–(5).
- (iii) If the resulting sequences are convergent, then their limits solve (7)–(10). Otherwise, the problem is unsolvable.
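
The convergence test of Algorithm 6 can likewise be sketched as a fixed-point iteration; the update maps below are toy contraction placeholders, not the equations (7)–(10):

```python
# Algorithm 6 (sketch): run the backward recursion until the iterates
# stop changing; non-convergence within `max_iter` steps means the
# algebraic equations are declared unsolvable.
def solve_algebraic(step_P1, step_P2, max_iter=10000, tol=1e-12):
    p1 = p2 = 0.0                 # start from the terminal values
    for _ in range(max_iter):
        n1 = step_P1(p1, p2)
        n2 = step_P2(p1, p2)
        if abs(n1 - p1) < tol and abs(n2 - p2) < tol:
            return n1, n2         # converged: a solution is found
        p1, p2 = n1, n2
    return None                   # sequences did not converge

# Toy contraction maps with fixed point (25/14, -15/14)
sol = solve_algebraic(lambda p1, p2: 0.5 * p1 + 0.1 * p2 + 1.0,
                      lambda p1, p2: 0.2 * p1 + 0.4 * p2 - 1.0)
```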

In [10], a necessary and sufficient condition for the infinite horizon $H_2/H_\infty$ control problem of discrete-time time-varying stochastic systems with Markov jumps was derived in terms of four coupled discrete-time Riccati equations. However, the Riccati equations in [10] were solved by trial and error, an approach that does not extend to more complicated cases. The corresponding condition for the infinite horizon $H_2/H_\infty$ control of the time-varying stochastic system (1) is as follows.

Lemma 7. *For system (1), assume that the associated system pairs are stochastically detectable. The infinite horizon $H_2/H_\infty$ control problem has a pair of solutions $(u^*, v^*)$, with the feedback gains being continuous matrix-valued functions, if and only if the coupled difference matrix-valued equations (17)–(20) admit a bounded solution.*

*Proof. *This is a direct corollary of Theorem 2 in [10] and the proof is omitted.

In this paper, the essential difference between Lemmas 1 and 7 is that the time horizon is finite in the former while it is infinite in the latter. Based on Algorithm 6, the coupled matrix-valued equations (17)–(20) can be solved by the following recursive algorithm.

*Algorithm 8.* Consider the following.

- (i) For a given time index, (17)–(20) reduce to time-invariant matrix-valued equations.
- (ii) Compute the solution of these time-invariant matrix-valued equations by using Algorithm 6.
- (iii) Advance to the next time index and go to step (i).

It is difficult for Algorithm 8 to compute all the solutions for a general time-varying system. However, it is easy to verify that the solutions of (17)–(20) are periodic for periodic systems. Hence, Algorithm 8 is suitable for the periodic case, as will be shown by Example 1.
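
For the periodic case, a natural stopping test for Algorithm 8 compares iterates one period apart, since the solution of (17)–(20) is itself periodic. A minimal sketch with a placeholder period-2 scalar recursion:

```python
# Algorithm 8 for a periodic system (sketch): iterate the backward
# recursion phase by phase and compare the value at a fixed phase
# across successive periods. `step[t]` is a placeholder update map
# for phase t of the period, not the paper's equations.
def solve_periodic(step, period, max_cycles=100000, tol=1e-12):
    p, prev = 0.0, None
    for _ in range(max_cycles):
        for t in reversed(range(period)):
            p = step[t](p)        # backward sweep through one period
        if prev is not None and abs(p - prev) < tol:
            return p              # converged to the periodic solution
        prev = p
    return None                   # no convergence

# Period-2 toy maps; one full cycle is p -> 0.15 * p + 2, whose
# fixed point is 2 / 0.85 = 40/17
p_star = solve_periodic({0: lambda p: 0.5 * p + 1.0,
                         1: lambda p: 0.3 * p + 2.0}, 2)
```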

#### 4. Continuous-Time Case

In contrast to the discrete-time case, it is more difficult to deal with the continuous-time stochastic $H_2/H_\infty$ control in Lemmas 3 and 4. In this study, the Runge-Kutta integration procedure and a convex optimization approach are applied to solve the coupled matrix-valued equations in Lemmas 3 and 4, respectively.

In Lemma 3, the coupled differential matrix-valued equations (12) and (13) can be viewed as a set of backward differential equations with known terminal conditions, which can be solved by the Runge-Kutta integration procedure [1]. The following algorithm can be used to solve (12) and (13) in Lemma 3.

*Algorithm 9.* Consider the following.

- (i) Rewrite (12) and (13) as a set of time-varying backward differential equations with known terminal conditions.
- (ii) Solve this set of equations by using the Runge-Kutta integration procedure.
- (iii) If the solutions of the set of equations are convergent, then the finite horizon $H_2/H_\infty$ control problem is solvable. Otherwise, the problem is unsolvable.
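
The backward Runge-Kutta sweep of step (ii) can be sketched for a scalar test equation: a negative step size integrates from the terminal time back to the origin. The test problem $\dot p = p$, $p(1) = e$ is an illustrative stand-in for (12) and (13), chosen because its backward solution $p(0) = 1$ is known exactly:

```python
import math

# Backward classical Runge-Kutta integration (sketch of step (ii) in
# Algorithm 9): a negative step size integrates dp/dt = f(t, p)
# from the terminal time t = T back to t = 0.
def rk4_backward(f, p_final, T, steps=1000):
    h = -T / steps                # negative step: backward in time
    t, p = T, p_final
    traj = [(t, p)]
    for _ in range(steps):
        k1 = f(t, p)
        k2 = f(t + h / 2, p + h * k1 / 2)
        k3 = f(t + h / 2, p + h * k2 / 2)
        k4 = f(t + h, p + h * k3)
        p += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
        traj.append((t, p))
    return traj

# Scalar test problem dp/dt = p with p(1) = e, so p(0) = 1 exactly
traj = rk4_backward(lambda t, p: p, math.e, 1.0)
t0, p0 = traj[-1]
```

In the matrix case, `p` would be the stacked pair of solution matrices of (12) and (13), integrated jointly.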

Next, we will study the algorithm for the solution of the coupled algebraic matrix-valued equations (15) and (16) in Lemma 4. In the scalar case, the curves represented by (15) and (16) can be plotted in a plane, and the intersections of these curves, if they exist, are the solutions of (15) and (16). Moreover, the intersection in the second quadrant is the solution that we need, as will be shown in Example 2.
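
Instead of reading the intersection off a plot, the scalar intersection can also be located numerically. The sketch below applies a plain Newton iteration with a finite-difference Jacobian to a placeholder pair of curves, not the actual equations (15) and (16):

```python
# Locating the intersection of two scalar curves F1 = 0 and F2 = 0
# numerically (sketch): Newton iteration with a finite-difference
# Jacobian. F1 and F2 below are placeholder curves.
def intersect(F1, F2, p1, p2, tol=1e-12, max_iter=100, h=1e-7):
    for _ in range(max_iter):
        f1, f2 = F1(p1, p2), F2(p1, p2)
        if abs(f1) < tol and abs(f2) < tol:
            break
        # finite-difference Jacobian [[a, b], [c, d]]
        a = (F1(p1 + h, p2) - f1) / h
        b = (F1(p1, p2 + h) - f1) / h
        c = (F2(p1 + h, p2) - f2) / h
        d = (F2(p1, p2 + h) - f2) / h
        det = a * d - b * c
        # Newton step: solve J * delta = -F by Cramer's rule
        p1 -= (d * f1 - b * f2) / det
        p2 -= (a * f2 - c * f1) / det
    return p1, p2

# Placeholder curves intersecting at (1, 0)
p1, p2 = intersect(lambda x, y: x * x - y - 1.0,
                   lambda x, y: x + y - 1.0, 0.5, 0.5)
```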

In the high-dimensional case, a suboptimal controller design algorithm for Lemma 4 was obtained in [8] by solving a convex optimization problem. However, this algorithm was developed under a very conservative assumption. Rewriting (15) and (16) in a common form (21) and substituting one equation into the other shows that a single matrix cannot satisfy two different equations simultaneously except in some very special cases.

In this paper, we present another convex optimization algorithm to solve (15) and (16). By Theorem 10 of [16], the desired matrix is the optimal solution of a semidefinite program. Relaxing the algebraic equalities to the matrix inequalities (24) and applying Schur's complement lemma yields the equivalent conditions (26). Since (26) are linear matrix inequalities (LMIs), a suboptimal solution to the coupled matrix-valued equations (21) may be derived by solving the convex optimization problem (28). Moreover, the infinite horizon $H_2/H_\infty$ control problem of system (14) then admits a pair of solutions constructed from the optimal matrices.

Summarizing the above, the following algorithm can be used to solve (15) and (16) in Lemma 4.

*Algorithm 10.* Consider the following.

- (i) Establish the LMIs (26) corresponding to the algebraic equations (15) and (16) in Lemma 4.
- (ii) If the convex optimization problem (28) is solvable, then the solution matrices can be derived, and the corresponding controller gains can be computed. Otherwise, (15) and (16) in Lemma 4 are unsolvable.

*Remark 11. *Note that, in Algorithm 10, conditions (26) are given in terms of linear matrix inequalities; therefore, by using the Matlab LMI-Toolbox, it is straightforward to check the feasibility of the convex optimization problem (28) without tuning any parameters. In fact, Algorithm 10 is also a suboptimal algorithm, and the conservatism comes from the inequality transforms (24).
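
Since the passage from the inequalities (24) to the LMIs (26) rests on Schur's complement lemma, a minimal scalar-block check of that equivalence may help clarify the reduction; the numbers below are arbitrary samples, not data from the paper:

```python
# Schur complement check (scalar blocks): for a symmetric matrix
# [[a, b], [b, c]] with c > 0, positive semidefiniteness is
# equivalent to a - b**2 / c >= 0.
def psd_2x2(a, b, c):
    # a 2x2 symmetric matrix is PSD iff its trace and determinant
    # are both nonnegative
    return a + c >= 0 and a * c - b * b >= 0

def schur_condition(a, b, c):
    # Schur complement test, valid when c > 0
    return c > 0 and a - b * b / c >= 0

samples = [(4.0, 1.0, 1.0), (1.0, 2.0, 1.0), (0.25, 0.5, 1.0),
           (3.0, -2.0, 2.0), (0.0, 0.1, 5.0)]
agree = all(psd_2x2(a, b, c) == schur_condition(a, b, c)
            for a, b, c in samples)
```

The same equivalence, applied blockwise, is what turns the nonlinear matrix inequalities into the linear constraints solvable by standard SDP tools.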

*Remark 12.* In this paper, we consider the $H_2/H_\infty$ control for stochastic systems with only state-dependent noise. As discussed in [17, 18], for most natural phenomena described by Itô stochastic systems, not only the state but also the control input or external disturbance may be corrupted by noise. Therefore, it is necessary to study stochastic systems with state-, control-, and disturbance-dependent noise, which makes the conditions for $H_2/H_\infty$ control more complicated. Searching for numerical solutions of these conditions deserves further study.

#### 5. Numerical Examples

In this section, several numerical examples will be provided to illustrate the effectiveness of Algorithms 8–10.

*Example 1.* Consider the infinite horizon $H_2/H_\infty$ control for a two-dimensional periodic stochastic system of form (1). By applying Algorithm 8, the evolutions of the solution matrices over the period are illustrated in Figures 1 and 2, respectively, which clearly show the convergence of the algorithm.

*Example 2.* Consider the finite horizon $H_2/H_\infty$ control for the one-dimensional stochastic system (31).
According to Algorithm 9, the coupled differential equations (12) and (13) can be viewed as a set of backward differential equations with known terminal conditions. Using the Runge-Kutta integration procedure, the evolutions of the solutions are given in Figure 3, which clearly shows the convergence of the solutions of (12) and (13).

On the other hand, for the one-dimensional time-invariant system (31), the infinite horizon $H_2/H_\infty$ control can be solved by searching for the intersection in the second quadrant of the curves represented by (15) and (16). From Figure 4, the solution of (15) and (16) can be read off at that intersection, and it coincides with the initial values observed in Figure 3. Therefore, the algebraic matrix-valued equations (15) and (16) can be solved by computing the initial conditions of the corresponding differential matrix-valued equations (12) and (13); this approach will be called the "initial condition method" in the following analysis.

*Example 3.* Consider the finite horizon $H_2/H_\infty$ control for a two-dimensional time-varying stochastic system of form (11). In this case, (12) and (13) correspond to a set of coupled differential equations.

By applying Algorithm 9, the evolutions of the solutions are shown in Figure 5.

*Example 4.* Consider the infinite horizon $H_2/H_\infty$ control for a three-dimensional stochastic system of form (14). According to Algorithm 10, solving the convex optimization problem (28) yields the solutions to (15) and (16).

*Example 5.* Consider the infinite horizon $H_2/H_\infty$ control for a two-dimensional stochastic system of form (14). In this example, (15) and (16) are solved by two different methods, namely, the initial condition method and Algorithm 10. By using Algorithm 9, the convergence of the solutions to (12) and (13) is shown in Figure 6, which yields the initial conditions of the solution matrices.

On the other hand, Algorithm 10 also yields solutions to (15) and (16).

*Remark 13.* Substituting the solutions from the initial condition method and those from Algorithm 10 into (15) and (16), it can be found that the former has a higher accuracy than the latter. Moreover, the initial condition method is less conservative than Algorithm 10 in some cases. For instance, the infinite horizon $H_2/H_\infty$ control of system (31) can be solved by the initial condition method (see Example 2), while Algorithm 10 yields no feasible optimization solution. However, Algorithm 10 has advantages over the initial condition method in the high-dimensional case. For example, it is difficult for the initial condition method to deal with the problem in Example 4, since it needs to solve a large set of coupled differential equations. Therefore, each method has its own advantages and proper scope.

#### 6. Conclusions

In this paper, we have studied algorithms for $H_2/H_\infty$ control problems of stochastic systems with state-dependent noise. For the finite and infinite horizon stochastic $H_2/H_\infty$ control problems, algorithms in the discrete-time case have been reviewed and studied, and algorithms in the continuous-time case have been developed. The validity of the obtained algorithms has been verified by numerical examples. This subject yields many interesting and challenging topics. For example, how can we design numerical algorithms to solve the $H_2/H_\infty$ control problems of stochastic systems with state-, control-, and disturbance-dependent noise? This issue deserves further research.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61203053 and 61174078), China Postdoctoral Science Foundation (no. 2013M531635), Special Funds for Postdoctoral Innovative Projects of Shandong Province (no. 201203096), Research Fund for the Taishan Scholar Project of Shandong Province of China, and SDUST Research Fund (no. 2011KYTD105).

#### References

1. D. J. N. Limebeer, B. D. O. Anderson, and B. Hendel, "A Nash game approach to mixed ${H}_{2}/{H}_{\infty}$ control," *IEEE Transactions on Automatic Control*, vol. 39, no. 1, pp. 69–82, 1994.
2. R. Muradore and G. Picci, "Mixed ${H}_{2}/{H}_{\infty}$ control: the discrete-time case," *Systems & Control Letters*, vol. 54, no. 1, pp. 1–13, 2005.
3. O. L. V. Costa and R. P. Marques, "Mixed ${H}_{2}/{H}_{\infty}$ control of discrete-time Markovian jump linear systems," *IEEE Transactions on Automatic Control*, vol. 43, pp. 95–100, 1998.
4. C. Du, L. Xie, J. N. Teoh, and G. Guo, "An improved mixed ${H}_{2}/{H}_{\infty}$ control design for hard disk drives," *IEEE Transactions on Control Systems Technology*, vol. 13, no. 5, pp. 832–839, 2005.
5. B.-S. Chen and W. Zhang, "Stochastic ${H}_{2}/{H}_{\infty}$ control with state-dependent noise," *IEEE Transactions on Automatic Control*, vol. 49, no. 1, pp. 45–57, 2004.
6. W. Zhang, Y. Huang, and H. Zhang, "Stochastic ${H}_{2}/{H}_{\infty}$ control for discrete-time systems with state and disturbance dependent noise," *Automatica*, vol. 43, no. 3, pp. 513–521, 2007.
7. W. Zhang, Y. Huang, and L. Xie, "Infinite horizon stochastic ${H}_{2}/{H}_{\infty}$ control for discrete-time systems with state and disturbance dependent noise," *Automatica*, vol. 44, no. 9, pp. 2306–2316, 2008.
8. Y. Huang, W. Zhang, and G. Feng, "Infinite horizon ${H}_{2}/{H}_{\infty}$ control for stochastic systems with Markovian jumps," *Automatica*, vol. 44, no. 3, pp. 857–863, 2008.
9. T. Hou, W. Zhang, and H. Ma, "Finite horizon ${H}_{2}/{H}_{\infty}$ control for discrete-time stochastic systems with Markovian jumps and multiplicative noise," *IEEE Transactions on Automatic Control*, vol. 55, no. 5, pp. 1185–1191, 2010.
10. H. Ma, W. Zhang, and T. Hou, "Infinite horizon ${H}_{2}/{H}_{\infty}$ control for discrete-time time-varying Markov jump systems with multiplicative noise," *Automatica*, vol. 48, no. 7, pp. 1447–1454, 2012.
11. L. Sheng, W. Zhang, and M. Gao, "Relationship between Nash equilibrium strategies and ${H}_{2}/{H}_{\infty}$ control of stochastic Markov jump systems with multiplicative noise," *IEEE Transactions on Automatic Control*, 2014.
12. X. Mao and C. Yuan, *Stochastic Differential Equations with Markovian Switching*, Imperial College Press, London, UK, 2006.
13. V. Dragan, T. Morozan, and A. M. Stoica, *Mathematical Methods in Robust Control of Discrete-Time Linear Stochastic Systems*, Springer, New York, NY, USA, 2010.
14. M. Sznaier and H. Rotstein, "An exact solution to general 4-blocks discrete-time mixed ${H}_{2}/{H}_{\infty}$ problems via convex optimization," in *Proceedings of the American Control Conference*, pp. 2251–2256, July 1994.
15. Y. Feng and B. D. O. Anderson, "An iterative algorithm to solve state-perturbed stochastic algebraic Riccati equations in LQ zero-sum games," *Systems & Control Letters*, vol. 59, no. 1, pp. 50–56, 2010.
16. M. A. Rami and X. Y. Zhou, "Linear matrix inequalities, Riccati equations, and indefinite stochastic linear quadratic controls," *IEEE Transactions on Automatic Control*, vol. 45, no. 6, pp. 1131–1143, 2000.
17. W. Zhang and G. Feng, "Nonlinear stochastic ${H}_{2}/{H}_{\infty}$ control with (*x*, *u*, *v*)-dependent noise: infinite horizon case," *IEEE Transactions on Automatic Control*, vol. 53, no. 5, pp. 1323–1328, 2008.
18. W. Zhang, B. Chen, H. Tang, L. Sheng, and M. Gao, "Some remarks on general nonlinear stochastic ${H}_{\infty}$ control with state, control, and disturbance-dependent noise," *IEEE Transactions on Automatic Control*, vol. 59, no. 1, pp. 237–242, 2014.

#### Copyright

Copyright © 2014 Ming Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.