Mathematical Problems in Engineering
Volume 2014, Article ID 265621, 11 pages
http://dx.doi.org/10.1155/2014/265621
Research Article

Stability and Linear Quadratic Differential Games of Discrete-Time Markovian Jump Linear Systems with State-Dependent Noise

College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, China

Received 7 July 2014; Accepted 6 September 2014; Published 23 November 2014

Academic Editor: Ramachandran Raja

Copyright © 2014 Huiying Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We mainly consider the stability of discrete-time Markovian jump linear systems with state-dependent noise, as well as the associated linear quadratic (LQ) differential games. A necessary and sufficient condition is proposed that connects the stochastic τ-stability of Markovian jump linear systems with state-dependent noise to a Lyapunov equation. Using the theory of stochastic τ-stability, we give the optimal strategies and the optimal cost values for infinite horizon LQ stochastic differential games. It is demonstrated that the solutions of infinite horizon LQ stochastic differential games are associated with four coupled generalized algebraic Riccati equations (GAREs). Finally, an iterative algorithm is presented to solve the four coupled GAREs, and a simulation example is given to illustrate its effectiveness.

1. Introduction

In this paper we discuss linear systems with Markovian jumps and state-dependent noise. Here, discrete-time stochastic linear systems subject to abrupt parameter changes can be modeled by a discrete-time finite-state Markov chain. They are a special sort of hybrid system with both modes and state variables. Since this class of systems was first introduced in the early 1960s, hybrid systems driven by continuous-time Markov chains have been broadly employed to model practical systems that may experience abrupt changes in structure and parameters, such as solar-powered systems, power systems, and battle management in command, control, and communication systems [1–4]. In the past several decades, considerable attention has been focused on the analysis and synthesis of linear systems with Markovian jumps, including stability analysis, state feedback and output feedback controller design, filter design, and so forth [5–11].

The stability theory of linear systems with Markovian jumps and state-dependent noise, which we also call Markovian jump stochastic linear systems (MJSLS for short), is rather involved, in that several distinct stability concepts exist. In particular, the stability of these systems has attracted the attention of many researchers [12–17]. The most important stability notions are mean-square stability, moment stability, and almost sure stability. Mean-square stability deals with the asymptotic convergence to zero of the second moment of the state norm. There are necessary and sufficient conditions for mean-square stability involving either the solution of coupled Lyapunov equations or the location in the complex plane of the eigenvalues of suitable augmented matrices [12, 13]. Moment stability, or δ-moment stability, requires the convergence to zero of the δ-moment of the state norm (mean-square stability is the particular case δ = 2). Although some practical sufficient conditions exist, a simple necessary and sufficient test for δ-moment stability is not available (except for δ = 2). Almost sure stability holds if the sample path of the state converges to zero with probability one. Checking almost sure stability involves determining the sign of the top Lyapunov exponent, which is usually a rather difficult task [14, 15]. Contrary to deterministic systems, for which all moments are stable whenever the sample path is stable, moment stability for stochastic systems implies almost sure stability, but not vice versa, as pointed out in [16].

On the other hand, LQ differential games have many applications in the economy, the military, and intelligent robotics. Since the publication of Isaacs' book Differential Games [18], the theory and application of differential games have developed greatly. A differential game is a mathematical model of a conflict between different agents who control a dynamical system, each trying to minimize an individual objective function by applying a control to the system. In fact, many situations in industry, economics, management, and elsewhere are characterized by multiple decision makers and enduring consequences of decisions, and these can be treated as dynamic games. Applications of differential games have been widely researched in the LQ control setting [19–23]. By solving LQ control problems, players can avoid most of the additional cost incurred by disturbances. Zero-sum, infinite-horizon LQ differential games are considered in [19]. A sufficient condition for LQ differential games is applied to an optimization problem in [20]. The LQ nonzero-sum stochastic differential games problem with random jumps is studied in [21]. The authors in [22] study the problem of designing suboptimal LQ differential games with multiple players. A leader-follower stochastic differential game is considered in [23], with the state equation being a linear Itô-type stochastic differential equation and the cost functionals being quadratic.

As far as we know, few researchers have paid attention to discrete-time stochastic LQ differential games, especially for MJSLS. Thus, it is significant to consider these systems. In [24], we considered stochastic differential games in the infinite-time horizon. By introducing stochastic exact observability and stochastic exact detectability, the optimal strategies (Nash equilibrium strategies) and the optimal cost values were given. In [25], we considered finite horizon LQ differential games for discrete-time stochastic systems with Markovian jump parameters and multiplicative noise. Furthermore, a suboptimal solution of the LQ differential games was proposed based on a convex optimization approach. On the basis of [26], in this paper we further investigate the LQ differential games for discrete-time MJSLS with a finite number of jump times. We give the optimal strategies and the optimal cost values for infinite horizon LQ stochastic differential games, which are associated with four coupled GAREs. In general, it is difficult to solve the four coupled GAREs analytically. Here, we solve the LQ differential games by means of stochastic τ-stability, and we employ a recursive procedure to solve the coupled GAREs.

The paper is organized as follows. In Section 2, some basic definitions are recalled, and a necessary and sufficient condition relating the τ-stability of MJSLS to a Lyapunov equation is presented. In Section 3, we formulate the LQ differential games with quadratic cost functions for the MJSLS and exploit the stochastic τ-stability of discrete-time MJSLS with a finite number of jump times, which is essential to obtain the main results. An iterative algorithm for solving the four coupled GAREs is put forward, and an illustrative example is displayed, in Section 4. Section 5 ends the paper with some concluding remarks.

For convenience, the paper adopts the following basic notations. R^n denotes the linear space of all n-dimensional real vectors, and R^{m×n} denotes the linear space of all m×n real matrices. A^T indicates the transpose of the matrix A, and A ≥ 0 (A > 0) represents a nonnegative definite (positive definite) matrix. The standard vector norm in R^n is indicated by ||·||, and the corresponding induced norm of a matrix A by ||A||. L²(Ω, R^n) represents the space of R^n-valued, square integrable random vectors, and δ(·) is the Dirac measure. Finally, we write x(k) in place of the state at time k, and we define the operator E_i(P) = Σ_{j∈Φ} p_{ij} P_j for P = (P_1, …, P_N).

2. Stochastic τ-Stability for Discrete-Time MJSLS with a Finite Number of Jump Times

Let (Ω, F, Prob) be a given fundamental probability space on which there exist a Markov chain θ(k) and a sequence of real random variables w(k). F_k denotes the σ-algebra generated by θ and w up to time k. Consider the following MJSLS defined on this space: x(k+1) = [A(θ(k)) + C(θ(k)) w(k)] x(k), x(0) = x_0, (1) where x(k) ∈ R^n are the states of the process; θ(k) is a time-homogeneous Markov chain taking values in a finite set Φ = {1, 2, …, N}, with initial distribution μ and transition probability matrix (p_ij), where p_ij = Prob(θ(k+1) = j | θ(k) = i). The set Φ comprises the various operation modes of system (1). For each i ∈ Φ, the matrices A(θ(k)) and C(θ(k)) (associated with the "ith" mode) will be written as A_i and C_i in places in the paper. w(k) is a sequence of real random variables, which is also a wide-sense stationary, second-order process with E[w(k)] = 0 and E[w(k)w(s)] = δ_{ks}, where δ_{ks} refers to the Kronecker function; that is, δ_{ks} = 1 if k = s and δ_{ks} = 0 if k ≠ s. The MJSLS as defined is trivially a strong Markov process. We assume that w(k) is independent of θ(k). When system (1) is stochastically τ-stable, we simply say that it is stable.
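The dynamics above can be simulated directly. The sketch below is illustrative only: the mode matrices A_i, noise matrices C_i, and transition probabilities are assumptions, not the paper's data. It samples one path of a two-mode system of the form x(k+1) = (A_θ(k) + C_θ(k) w(k)) x(k), with the mode driven by a Markov chain and zero-mean, unit-variance noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode data (not taken from the paper):
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.3, 0.2], [0.1, 0.5]])]          # drift matrices A_i
C = [0.1 * np.eye(2), 0.2 * np.eye(2)]            # state-dependent noise matrices C_i
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])                        # transition probabilities p_ij

def simulate(x0, theta0, steps):
    """Sample one path of x(k+1) = (A_theta + C_theta * w_k) x(k)."""
    x, theta = np.asarray(x0, dtype=float), theta0
    path = [x.copy()]
    for _ in range(steps):
        w = rng.standard_normal()                 # zero-mean, unit-variance noise
        x = (A[theta] + C[theta] * w) @ x         # noise enters multiplied by the state
        theta = rng.choice(len(P), p=P[theta])    # Markov jump to the next mode
        path.append(x.copy())
    return np.array(path)

traj = simulate([1.0, -1.0], 0, 200)
print(traj.shape)
```

Each row of `traj` is the state at one time step; repeated runs give sample paths over which moments such as E||x(k)||² can be estimated.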

Although several concepts of stochastic stability can be found in the literature, in this paper the stochastic stability concept associated with stopping times for MJSLS is studied. The stopping times related to jump times are the successive jump times of the Markov chain: τ_0 = 0 and, for n ≥ 1, τ_n = min{k > τ_{n−1} : θ(k) ≠ θ(τ_{n−1})}. The stopping time may represent interesting situations from the point of view of applications. For instance, it can mark the accumulated nth failure and repair of the system. In another situation, the stopping time can represent the occurrence of a “crucial failure” (which may happen after a random number of failures).

This class of stochastic systems is associated with systems subject to failures in their components or connections according to a Markov chain. The situation we are interested in arises when one wishes to study the stability of such a system until the occurrence of a fixed number of failures and repairs. The paper considers the sequence of stopping times containing the successive times of occurrence of such failures, and then studies the stability of system (1) with respect to these stopping times.

Definition 1 (see [27]). Consider a stopping time τ. The MJSLS (1) is
(i) stochastically τ-stable if, for each initial condition x_0 and initial distribution μ, E[Σ_{k=0}^{τ} ||x(k)||²] < ∞;
(ii) mean-square τ-stable if, for each initial condition x_0 and initial distribution μ, E[||x(τ_n)||²] → 0 as n → ∞.
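Definition 1(i) can be probed numerically. The sketch below gives a Monte Carlo estimate, under assumed two-mode data (all matrices are illustrative assumptions, not the paper's example), of the accumulated cost E[Σ_{k=0}^{τ_N} ||x(k)||²] up to the Nth jump time τ_N. An estimate that stays bounded as N grows is consistent with, though of course no proof of, stochastic τ-stability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-mode data (assumptions, not the paper's example):
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.3, 0.2], [0.1, 0.5]])]
C = [0.1 * np.eye(2), 0.1 * np.eye(2)]
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def cost_until_Nth_jump(x0, theta0, N, max_steps=10_000):
    """Accumulate ||x(k)||^2 from k = 0 up to the Nth jump time tau_N."""
    x, theta = np.asarray(x0, dtype=float), theta0
    total, jumps = 0.0, 0
    for _ in range(max_steps):
        total += x @ x                        # add ||x(k)||^2
        w = rng.standard_normal()
        x = (A[theta] + C[theta] * w) @ x     # advance the state
        nxt = rng.choice(len(P), p=P[theta])  # advance the mode
        if nxt != theta:
            jumps += 1
        theta = nxt
        if jumps == N:                        # tau_N reached: include x(tau_N)
            return total + x @ x
    return total                              # safety cap, stable runs return earlier

est = np.mean([cost_until_Nth_jump([1.0, 0.0], 0, N=3) for _ in range(2000)])
print(est)
```

Averaging over many independent runs approximates the expectation in Definition 1(i) for the chosen stopping time.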

Lemma 2 (see [27]). For all and

Remark 3. and , whenever and , respectively. That is, in either case the system will jump to another state.

Next, we will give an important theorem that will be used later.

Theorem 4. The MJSLS (1) is τ-stable if and only if, for any given set of matrices , there exists a unique set of matrices , satisfying the Lyapunov equations (7).

Proof. Sufficiency. In the proof we employ an induction argument on the stopping times . First, define the function where , and is the solution of The existence of such relies on (7). Hence, to functional in the following operation, we can derive We know that and , where if and if . Calculating the expected values above, we can obtain that due to , and .
The above relation can be rewritten as where Now, let us observe that By applying (12) and considering that for each initial condition and initial distribution , then we have for some . Because , then by (5). Finally, it is easy to verify that, for any , holds from (15). Therefore, for any , MJSLS (1) is stable according to (i) of Definition 1.
Now, using an induction argument, we assume that for some the inequality holds and thus, by setting , . However, notice that, using the strong Markov property and the homogeneity property, the second term conditioned on the knowledge of () can be written as So one can conclude from (18) and (19) that Therefore, for any , the MJSLS (1) is τ-stable.
Necessity. As in the previous part define the functional for all . Therefore, The right-hand side of (22) can be expressed as In addition, Thus, based on the strong Markov property, applying homogeneity in (25) and introducing it in (24), we arrive at Since is arbitrary, and calculating the expected values above, (26) implies that using the fact that and . Thus, from the Lyapunov stability theory, the existence of the set satisfying (7) is guaranteed, completing the proof for .
Now, for the general case, from the stochastic τ-stability of the system we can obtain that And from the strong Markov property, we can deduce that for . By the homogeneity property, it follows that (29) is equivalent to (22) with and , and the existence of a set of matrices satisfying (7) is assured. This completes the proof of Theorem 4.
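Since equation (7) itself is not legible in this copy, the sketch below instead illustrates a Lyapunov-type test of the same family: the standard coupled Lyapunov equations for a discrete-time Markov jump system with state-dependent noise, X_i = A_i^T E_i(X) A_i + C_i^T E_i(X) C_i + Q_i with E_i(X) = Σ_j p_ij X_j, solved by fixed-point iteration. The iteration converges to positive definite X_i exactly when this stability test succeeds. All data are illustrative assumptions.

```python
import numpy as np

# Illustrative two-mode data (assumptions, not the paper's example):
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.3, 0.2], [0.1, 0.5]])]
C = [0.1 * np.eye(2), 0.1 * np.eye(2)]
Pr = np.array([[0.9, 0.1],
               [0.3, 0.7]])                     # transition probabilities p_ij
Q = [np.eye(2), np.eye(2)]                      # given positive definite Q_i

def coupled_lyapunov(A, C, Pr, Q, iters=1000, tol=1e-12):
    """Fixed-point iteration X_i <- A_i' E_i(X) A_i + C_i' E_i(X) C_i + Q_i."""
    N, n = len(A), A[0].shape[0]
    X = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        E = [sum(Pr[i, j] * X[j] for j in range(N)) for i in range(N)]
        Xn = [A[i].T @ E[i] @ A[i] + C[i].T @ E[i] @ C[i] + Q[i] for i in range(N)]
        if max(np.abs(Xn[i] - X[i]).max() for i in range(N)) < tol:
            return Xn                            # converged: a stability certificate
        X = Xn
    return X

X = coupled_lyapunov(A, C, Pr, Q)
# If the iteration converged, each X_i should be positive definite.
print([float(np.linalg.eigvalsh(Xi).min()) for Xi in X])
```

For an unstable mode set, the iterates grow without bound instead of converging, which is how the test fails in practice.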

3. LQ Differential Games for MJSLS with a Finite Number of Jump Times

3.1. Problem Formulation

Now we study the LQ differential games for discrete-time MJSLS. Compared with system (1), consider the following discrete-time MJSLS with a finite number of jump times: where are the measurement outputs for each player and represent the system control inputs. The matrices (associated with the “ith” mode) will be written as for each .

Throughout this paper, we choose the infinite horizon quadratic cost functions associated with each player: The weighting matrices , and .

So we look for a pair of actions that simultaneously satisfy where .

To ensure the finiteness of the infinite-time cost functions, we restrict the admissible control set to constant linear feedback strategies; that is, , , where and are constant matrices of appropriate dimensions, belonging to

We say that the optimization problem is well posed and the and have the following two additional properties:

The optimal strategies and determined by (32) are also called the Nash equilibrium strategies. In order to guarantee unique global Nash solutions, both players are only allowed to take constant feedback controls. Next we focus on finding the optimal strategies.

3.2. Main Results

First, we give an important lemma that will be used later. If system (1) is τ-stable, we can obtain the following result for the discrete-time MJSLS (30).

Lemma 5. If is τ-stable, then so is , where .

Proof. The proof employs an induction argument on the stopping times . First, define the function where , and is the solution of The existence of such relies on (7). Hence, applying the function to system (30) in the following operation, we acquire that Compared with (12), we know where Considering that , we can obtain (19). And because for each initial condition , from (40), it is easy to verify (21). Therefore, the MJSLS (30) is τ-stable.

Theorem 6. For system (30), suppose the following coupled equations admit the solutions with , : where If is τ-stable, then (i) ; (ii) the problem of infinite horizon stochastic differential games admits a pair of solutions with , ; (iii) the optimal cost functions incurred by playing strategies are .

Proof. In the deduction of Lemma 5, we can prove that (i) is correct. Next what we have to do is to prove (ii) and (iii). In the light of the Lyapunov equation (7) and any given set of matrices in Theorem 4, it is easy to get the following equations for system (30): By rearranging (45) and (46), (40) and (42) can be obtained, respectively.
Noting , and by substituting into (30), it is easy to get the following system: Then, considering the scalar function , we have Due to by (40) and a completing squares technique, (31) can be derived that Then by (32), it follows that and . Finally, by substituting into (30), in the same way as before, we have and .

Theorem 7. If is τ-stable and, for system (30), (40)–(43) admit the solution with , then (i) , ; (ii) the problem of infinite horizon stochastic differential games admits a pair of solutions with , ; (iii) the optimal cost functions incurred by playing strategies are .

Remark 8. When , these results still hold. Only for simplicity do we assume, in (1) and (30), that the state and control inputs depend on the same noise . If they rely on different noises , new results can be derived; the discussion is omitted here.

4. Iterative Algorithm and Simulation

4.1. An Iterative Algorithm

In this section, an iterative algorithm is proposed to solve the four coupled GAREs (40)–(43). Infinite horizon Riccati equations are hard to solve directly; hence the problem can be approached via finite horizon equations. Let represent the finite number of iterations in the following equations: where An iterative process for solving (40)–(43) based on the above recursions is presented as follows.
(a) Given an appropriate natural number and the initial conditions , , , and .
(b) From the numerical values of , , , and , compute , , , and according to (55).
(c) and can be computed by (52) and (54), respectively. Then and can also be obtained by (51) and (53), respectively.
(d) Let , , , and .
(e) Then . Repeat steps (b)–(d) until the number of iterations is . We can finally obtain the numerical values of , , , and .
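The four coupled GAREs (40)–(43) are not legible in this copy, so the sketch below only illustrates the structure of steps (a)–(e) on a single coupled GARE per mode (one player): initialize at zero, repeat the coupled Riccati recursion until the iterates stop changing, then read off the stationary feedback gains. All matrices A_i, B_i, C_i, Q_i, R_i and the transition matrix are hypothetical.

```python
import numpy as np

# Hypothetical data for a two-mode, one-player problem (assumptions only):
A = [np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.3, 0.2], [0.1, 0.5]])]
C = [0.1 * np.eye(2), 0.1 * np.eye(2)]
B = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
Pr = np.array([[0.9, 0.1], [0.3, 0.7]])
Q = [np.eye(2), np.eye(2)]
R = [np.eye(1), np.eye(1)]

def solve_gare(T=500, tol=1e-12):
    N, n = len(A), A[0].shape[0]
    P = [np.zeros((n, n)) for _ in range(N)]           # step (a): zero initial condition
    for _ in range(T):                                 # steps (b)-(e): repeat the recursion
        E = [sum(Pr[i, j] * P[j] for j in range(N)) for i in range(N)]
        Pn = []
        for i in range(N):
            S = A[i].T @ E[i] @ B[i]                   # cross term
            G = R[i] + B[i].T @ E[i] @ B[i]            # regularized input weighting
            Pn.append(Q[i] + A[i].T @ E[i] @ A[i] + C[i].T @ E[i] @ C[i]
                      - S @ np.linalg.solve(G, S.T))
        if max(np.abs(Pn[i] - P[i]).max() for i in range(N)) < tol:
            return Pn                                  # iterates settled: stationary solution
        P = Pn
    return P

P = solve_gare()
E = [sum(Pr[i, j] * P[j] for j in range(len(A))) for i in range(len(A))]
# Stationary feedback gains u(k) = K_{theta(k)} x(k):
K = [-np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
     for i in range(len(A))]
print(K)
```

The two-player case of the paper would run the analogous recursion on all four coupled equations simultaneously; the stopping rule and zero initialization are the same.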

As in [28], under the assumptions of stabilizability, for any , Therefore, where are the solutions of (40)–(43).

4.2. A Simulation Example

To verify the efficiency of the above iterative algorithm, we consider the following 2-D example. In system (30), we set , , For convenience, let , , and . When , by applying the above iterative algorithm, we obtain the solutions of the four coupled equations (51)–(54) as follows: which are also the solutions of (40)–(43) according to (57). The solutions show that and . The evolution of is exhibited in Figures 1 and 2, which clearly illustrate the convergence and speed of the backward iterations. When , it is easy to obtain that are also the solutions of (40)–(43), and and . Since the process is the same as above, we do not repeat it here due to space limitations.

Figure 1: Evolution of and .
Figure 2: Evolution of and .

5. Conclusions

In this paper we have discussed the τ-stability of discrete-time MJSLS with a finite number of jump times and its infinite horizon LQ differential games. Based on the relations between the Lyapunov equation and the stability of discrete-time MJSLS, we have obtained some useful theorems for finding the solutions of the LQ differential games. Moreover, an iterative algorithm has been presented for solving the four coupled equations. Finally, a numerical example has been offered to demonstrate the efficiency of the algorithm. Exact observability and detectability for discrete-time MJSLS are investigated in [29, 30]. On the basis of exact observability and detectability, infinite horizon stochastic differential games should be discussed, and we will pursue this in future research.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. 61304080 and 61174078), a Project of Shandong Province Higher Educational Science and Technology Program (no. J12LN14), the Research Fund for the Taishan Scholar Project of Shandong Province of China, and the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (no. LAPS13018).

References

  1. M. Mariton, Jump Linear Systems in Automatic Control, CRC Press, 1990.
  2. M. K. Ghosh, A. Arapostathis, and S. I. Marcus, “Optimal control of switching diffusions with application to flexible manufacturing systems,” SIAM Journal on Control and Optimization, vol. 31, no. 5, pp. 1183–1204, 1993.
  3. E. K. Boukas, Z. K. Liu, and G. X. Liu, “Delay-dependent robust stability and H∞ control of jump linear systems with time-delay,” International Journal of Control, vol. 74, no. 4, pp. 329–340, 2001.
  4. X. R. Mao, “Exponential stability of stochastic delay interval systems with Markovian switching,” IEEE Transactions on Automatic Control, vol. 47, no. 10, pp. 1604–1612, 2002.
  5. T. Morozan, “Stability and control for linear systems with jump Markov perturbations,” Stochastic Analysis and Applications, vol. 13, no. 1, pp. 91–110, 1995.
  6. O. L. Costa and M. D. Fragoso, “Discrete-time LQ-optimal control problems for infinite Markov jump parameter systems,” IEEE Transactions on Automatic Control, vol. 40, no. 12, pp. 2076–2088, 1995.
  7. R. Rakkiyappan, Q. Zhu, and A. Chandrasekar, “Stability of stochastic neural networks of neutral type with Markovian jumping parameters: a delay-fractioning approach,” Journal of the Franklin Institute, vol. 351, no. 3, pp. 1553–1570, 2014.
  8. D. Yue and Q.-L. Han, “Delay-dependent exponential stability of stochastic systems with time-varying delay, nonlinearity, and Markovian switching,” IEEE Transactions on Automatic Control, vol. 50, no. 2, pp. 217–222, 2005.
  9. Y. Zhang, P. Shi, S. Kiong Nguang, and H. R. Karimi, “Observer-based finite-time fuzzy H∞ control for discrete-time systems with stochastic jumps and time-delays,” Signal Processing, vol. 97, pp. 252–261, 2014.
  10. Y. Wei, J. Qiu, H. R. Karimi, and M. Wang, “Filtering design for two-dimensional Markovian jump systems with state-delays and deficient mode information,” Information Sciences, vol. 269, pp. 316–331, 2014.
  11. H. Dong, Z. Wang, D. W. Ho, and H. Gao, “Robust H∞ filtering for Markovian jump systems with randomly occurring nonlinearities and sensor saturation: the finite-horizon case,” IEEE Transactions on Signal Processing, vol. 59, no. 7, pp. 3048–3057, 2011.
  12. Y. Ji, H. J. Chizeck, X. Feng, and K. A. Loparo, “Stability and control of discrete-time jump linear systems,” Control Theory and Advanced Technology, vol. 7, no. 2, pp. 247–270, 1991.
  13. X. Feng, K. A. Loparo, Y. Ji, and H. J. Chizeck, “Stochastic stability properties of jump linear systems,” IEEE Transactions on Automatic Control, vol. 37, no. 1, pp. 38–53, 1992.
  14. Z. G. Li, Y. C. Soh, and C. Y. Wen, “Sufficient conditions for almost sure stability of jump linear systems,” IEEE Transactions on Automatic Control, vol. 45, no. 7, pp. 1325–1329, 2000.
  15. Y. Fang and K. A. Loparo, “On the relationship between the sample path and moment Lyapunov exponents for jump linear systems,” IEEE Transactions on Automatic Control, vol. 47, no. 9, pp. 1556–1560, 2002.
  16. F. Kozin, “A survey of stability of stochastic systems,” Automatica, vol. 5, pp. 95–112, 1969.
  17. Q. X. Zhu and J. Cao, “Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 3, pp. 467–479, 2012.
  18. R. Isaacs, Differential Games, John Wiley & Sons, New York, NY, USA, 1965.
  19. A. A. Stoorvogel, “The singular zero-sum differential game with stability using H∞ control theory,” Mathematics of Control, Signals, and Systems, vol. 4, no. 2, pp. 121–138, 1991.
  20. V. Turetsky, “Differential game solubility condition in H∞ optimization,” Nonsmooth and Discontinuous Problems of Control and Optimization, pp. 209–214, 1998.
  21. Z. Wu and Z. Y. Yu, “Linear quadratic nonzero-sum differential games with random jumps,” Applied Mathematics and Mechanics, vol. 26, no. 8, pp. 1034–1039, 2005.
  22. X.-H. Nian, “Suboptimal strategies of linear quadratic closed-loop differential games: an BMI approach,” Acta Automatica Sinica, vol. 31, no. 2, pp. 216–222, 2005.
  23. J. Yong, “A leader-follower stochastic linear quadratic differential game,” SIAM Journal on Control and Optimization, vol. 41, no. 4, pp. 1015–1041, 2002.
  24. H. Y. Sun, M. Li, and W. H. Zhang, “Linear-quadratic stochastic differential game: infinite-time case,” ICIC Express Letters, vol. 5, no. 4, pp. 1449–1454, 2011.
  25. H. Sun, L. Jiang, and W. Zhang, “Feedback control on Nash equilibrium for discrete-time stochastic systems with Markovian jumps: finite-horizon case,” International Journal of Control, Automation and Systems, vol. 10, no. 5, pp. 940–946, 2012.
  26. H. Y. Sun, C. Y. Feng, and L. Y. Jiang, “Linear quadratic differential games for discrete-time Markovian jump stochastic linear systems: infinite-horizon case,” in Proceedings of the 30th Chinese Control Conference (CCC '11), pp. 1983–1986, Yantai, China, July 2011.
  27. J. B. do Val, C. Nespoli, and Y. R. Cáceres, “Stochastic stability for Markovian jump linear systems associated with a finite number of jump times,” Journal of Mathematical Analysis and Applications, vol. 285, no. 2, pp. 551–563, 2003.
  28. W. H. Zhang, Y. L. Huang, and H. S. Zhang, “Stochastic H2/H∞ control for discrete-time systems with state and disturbance dependent noise,” Automatica, vol. 43, no. 3, pp. 513–521, 2007.
  29. T. Hou, Stability and robust H2/H∞ control for discrete-time Markov jump systems [Ph.D. dissertation], Shandong University of Science and Technology, Qingdao, China, 2010.
  30. W. H. Zhang and C. Tan, “On detectability and observability of discrete-time stochastic Markov jump systems with state-dependent noise,” Asian Journal of Control, vol. 15, no. 5, pp. 1366–1375, 2013.