Mathematical Problems in Engineering
Volume 2015, Article ID 382756, 7 pages
http://dx.doi.org/10.1155/2015/382756
Research Article

Control Design of Detectable Periodic Markov Jump Systems

Ting Hou and Hongji Ma

1College of Mathematics and Systems Science, State Key Laboratory of Mining Disaster Prevention and Control Co-Founded by Shandong Province and the Ministry of Science and Technology, Shandong University of Science and Technology, Qingdao 266590, China
2College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China

Received 22 May 2015; Accepted 6 September 2015

Academic Editor: Leonid Shaikhet

Copyright © 2015 Ting Hou and Hongji Ma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An infinite horizon H2/H∞ control problem is addressed for discrete-time periodic Markov jump systems with multiplicative noise. First, by use of the spectral criterion of detectability, an extended Lyapunov stability theorem is developed for the concerned dynamics. Further, based on a game theoretic approach, a state-feedback control design is proposed. It is shown that, under the condition of detectability, the feedback gain can be constructed through the solution of a group of coupled periodic difference equations.

1. Introduction

H∞ control has been one of the most active areas of modern control theory since the 1970s. Owing to the introduction of the state-space approach [1], many researchers have been inspired to extend the deterministic H∞ control theory to various stochastic systems; see [2–5]. In the development of stochastic H∞ theory, [2] can be regarded as a pioneering work, which first established a stochastic version of the bounded real lemma for linear Itô-type differential systems. Besides, initiated by [6], considerable progress has been made in the study of stochastic H2/H∞ control. By combining the H∞ index with an H2 quadratic cost performance, the resulting multiobjective H2/H∞ control strategy is more attractive than H∞ control alone in engineering applications.

The main objective of this paper is to settle an infinite horizon H2/H∞ control problem for periodic Markov jump systems with multiplicative noise. By now, Markov jump systems have been extensively investigated [7–9]. For example, stochastic and robust stability have been elaborately discussed in [10, 11] for networked dynamics with Markovian jumps. As concerns H∞ theory, an H∞ estimation problem was tackled in [12] for a class of discrete-time homogeneous Markov jump systems. On the other hand, an infinite horizon H∞ control problem was handled in [13] for nonlinear Itô systems with a homogeneous Markov process. However, few results have been reported on H2/H∞ control of periodic Markov jump systems. To some extent, this study generalizes the work of [14] to the case of periodically time-varying coefficients and transition probabilities, as in [15–17].

The remainder of this paper is organized as follows. Section 2 gives basic preliminaries and the problem formulation. In Section 3, the intrinsic relationship between asymptotic mean square stability and detectability is addressed; as a result, a Barbashin-Krasovskii-type theorem is established for periodic Markov jump systems with state-dependent noise. Section 4 contains an internally stabilizing H2/H∞ control design, which not only fulfills the prescribed disturbance attenuation level but also minimizes the output energy. To verify the effectiveness of the proposed approach, a numerical example is supplied in Section 5. Finally, Section 6 concludes the paper.

Notations. R^n (C^n) is the n-dimensional real (complex) vector space with the usual Euclidean norm |·|; R^{m×n} is the space of all m×n real matrices with the operator norm ‖·‖; S_n is the set of all n×n symmetric matrices whose entries may be complex; A > 0 (A ≥ 0) means that A is a positive (semi)definite matrix; A' is the transpose of a matrix (vector) A; I is the identity matrix; ⊗ is the operation of Kronecker product; Ker(A) is the kernel of a matrix A; diag(·) is a (block) diagonal matrix.

2. Preliminaries

On a complete probability space, we consider the following discrete-time periodic Markov jump system (1) with multiplicative noise, whose signals denote the system state, control input, exogenous disturbance, and measurement output, respectively. Assume that the driving noise is a sequence of independent random vectors with zero mean and identity covariance (white in the sense of the Kronecker delta). The Markov chain takes values in a finite mode set with a nondegenerate transition probability matrix and a given initial distribution. As usual, we assume that the initial state is independent of the driving noise and the Markov chain and that the system mode is measurable in real time. Moreover, the coefficients of (1) and the transition probabilities of the Markov chain are periodic with a common period. The admissible disturbance class is the space of nonanticipative, square summable stochastic processes adapted to the filtration generated by the noise and the Markov chain; it is a real Hilbert space with the norm induced by the usual inner product.

Definition 1 (see [18]). The zero-state equilibrium of the unforced discrete-time periodic Markov jump system (2) is called asymptotically mean square stable (AMSS) if, for every initial state and initial mode, the second moment E|x(t)|^2 of the corresponding state of (2) tends to zero as t tends to infinity. Moreover, if there exists a periodic feedback gain sequence such that the zero-state equilibrium of the closed-loop system (3) is AMSS for every initial condition, then system (1) is called stochastically stabilizable and the feedback is called a stabilizing feedback.

By Theorem 3.10 [18], we know that system (2) is asymptotically mean square stable if and only if it is exponentially mean square stable.
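To make the setting concrete, the following sketch simulates an unforced two-mode periodic Markov jump system with state-multiplicative noise and estimates the second moment E|x(t)|^2 by Monte Carlo, so that the decay required by Definition 1 can be observed empirically. The system form x(t+1) = (A_t(θ_t) + A1_t(θ_t) w(t)) x(t) and all numerical values are illustrative assumptions, not the coefficients of (1).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2                                   # assumed period
# Illustrative periodic coefficients: A[t][i], A1[t][i] for mode i at time t mod T
A  = [[np.array([[0.5, 0.1], [0.0, 0.6]]), np.array([[0.4, 0.2], [0.1, 0.5]])],
      [np.array([[0.6, 0.0], [0.1, 0.4]]), np.array([[0.5, 0.1], [0.0, 0.3]])]]
A1 = [[0.2 * np.eye(2), 0.1 * np.eye(2)],
      [0.1 * np.eye(2), 0.2 * np.eye(2)]]
# Periodic transition probabilities: P[t][i, j] = Prob(theta_{t+1} = j | theta_t = i)
P  = [np.array([[0.7, 0.3], [0.4, 0.6]]),
      np.array([[0.5, 0.5], [0.2, 0.8]])]

def sample_path(x0, i0, horizon):
    """One realization of the unforced system x(t+1) = (A + A1*w(t)) x(t)."""
    x, mode = np.array(x0, dtype=float), i0
    sq_norms = [x @ x]
    for t in range(horizon):
        k = t % T
        w = rng.standard_normal()            # E[w] = 0, E[w^2] = 1
        x = (A[k][mode] + A1[k][mode] * w) @ x
        mode = rng.choice(2, p=P[k][mode])
        sq_norms.append(x @ x)
    return np.array(sq_norms)

# Monte Carlo estimate of E|x(t)|^2 from a fixed initial state and mode (Definition 1)
paths = np.array([sample_path([1.0, -1.0], 0, 40) for _ in range(2000)])
print("estimated E|x(t)|^2 at t = 0, 10, 20, 40:", paths.mean(axis=0)[[0, 10, 20, 40]])
```

If the printed second moments decay toward zero, the sampled realization is consistent with AMSS; a definitive check is provided by the spectral test of Lemma 3 below.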

Definition 2 (see [19]). The periodic Markov jump system (2), together with its measurement equation, is called (uniformly) detectable if, for any initial time, initial state, and initial mode, the output-energy condition of [19] holds.

In this paper, we will deal with the infinite horizon H2/H∞ optimal control problem for (1). More specifically, for a prescribed disturbance attenuation level γ > 0, we aim to find a linear, memoryless, periodic state-feedback controller such that [20]:
(i) when the exogenous disturbance vanishes, the closed-loop state of (1) driven by the controller is AMSS;
(ii) the l2-induced norm of the perturbation operator, which maps an arbitrary admissible random disturbance to the output of (1) under the controller with zero initial state, is less than γ;
(iii) when the worst-case disturbance, if it exists, is imposed on (1), the controller minimizes the corresponding output energy.
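As a rough empirical counterpart of requirement (ii), the sketch below produces a Monte Carlo lower bound on the l2-induced norm of the disturbance-to-output map by sampling finite-horizon deterministic disturbances and averaging the output energy over noise and mode realizations. It continues the previous sketch (reusing A, A1, P, T, rng) and adds a disturbance channel B1 and output map C as further placeholders; none of these are the data of (1), and the estimate is only a lower bound, not a certificate that the norm is below γ.

```python
# Crude Monte Carlo lower bound on the l2-induced norm ||L|| = sup ||z|| / ||v||.
# Continuation of the previous sketch: A, A1, P, T, rng are reused; B1, C are added placeholders.
B1 = [[np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])],
      [np.array([[1.0], [0.5]]), np.array([[0.5], [1.0]])]]
C  = [[np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])],
      [np.array([[1.0, 1.0]]), np.array([[1.0, 0.0]])]]

def output_energy(v, n_mc=500):
    """Estimate E sum_t |z(t)|^2 for zero initial state and disturbance sequence v."""
    total = 0.0
    for _ in range(n_mc):
        x, mode = np.zeros(2), 0
        for t in range(len(v)):
            k = t % T
            z = C[k][mode] @ x
            total += float(z @ z)
            w = rng.standard_normal()
            x = (A[k][mode] + A1[k][mode] * w) @ x + B1[k][mode] @ v[t:t + 1]
            mode = rng.choice(2, p=P[k][mode])
    return total / n_mc

best = 0.0
for _ in range(20):                          # sample deterministic disturbances of length 30
    v = rng.standard_normal(30)
    best = max(best, float(np.sqrt(output_energy(v) / (v @ v))))
print("Monte Carlo lower bound on the induced norm:", best)
```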

3. Stability and Detectability

In this section, we will focus on the detectability of the periodic Markov jump system (1). This structural property plays an essential role in the treatment of the H2/H∞ control problem. First, we present several instrumental operators.

Let (resp., ) indicate the set of all sequences with (resp., ). Thus, is a Hilbert space with the inner product:

Given t, let the Lyapunov operator be defined as follows. Then, associated with inner product (6), its adjoint operator is given by:

In terms of the Lyapunov operator, we can construct a causal evolution operator; when the two time arguments coincide, it reduces to the identity operator.

To proceed, let us introduce the following two linear operators (cf. [14, 21]), the second of which is built entry by entry from the first. It is easy to verify that both operators are invertible and satisfy the identity below, in which the constant matrix involved has full column rank. In (11), the resulting matrix is called the induced matrix of the Lyapunov operator. Repeating the above steps, the induced matrix of the evolution operator is realized as a product of such matrices. Particularly, the induced matrix of the one-period evolution operator is denoted as indicated.
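The operators introduced above follow the H-representation idea of [21]: symmetric matrices are stacked into vectors through a fixed full-column-rank matrix, and any linear operator on symmetric matrices then acquires an induced matrix on the stacked coordinates. The sketch below illustrates the mechanism for the elementary operator mapping X to A X A' with n = 2, using the standard duplication-matrix construction as one concrete realization of the stacking; it is meant only as an illustration of the idea, not as the paper's exact operators.

```python
import numpy as np

n = 2
# Full-column-rank matrix H with vec(X) = H s(X) for symmetric X, where
# s(X) = (x11, x12, x22) stacks the independent entries (duplication matrix for n = 2)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
H_pinv = np.linalg.pinv(H)                    # left inverse, since H has full column rank

A = np.array([[0.9, 0.3], [0.0, 0.7]])

# Induced matrix of the operator L(X) = A X A' on the stacked coordinates:
# vec(L(X)) = (A kron A) vec(X)  =>  s(L(X)) = H^+ (A kron A) H s(X)
M = H_pinv @ np.kron(A, A) @ H

# Numerical check on a random symmetric X
rng = np.random.default_rng(1)
S = rng.standard_normal((n, n))
X = S + S.T
s = lambda Y: np.array([Y[0, 0], Y[0, 1], Y[1, 1]])
print(np.allclose(s(A @ X @ A.T), M @ s(X)))          # True: M is the induced matrix
print(np.isclose(max(abs(np.linalg.eigvals(M))), max(abs(np.linalg.eigvals(A))) ** 2))
```

The second check reflects the spectral correspondence between the operator and its induced matrix, which is exactly what Lemma 3 below exploits.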

Next, we will give two useful lemmas, which have been shown in [19].

Lemma 3. System (2) is AMSS if and only if the spectrum of the one-period evolution operator (equivalently, of its induced matrix) is contained in the open unit disk; here the spectrum of an operator (or a matrix) is understood in the usual sense.
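Lemma 3 reduces mean square stability to an eigenvalue test. For the illustrative system class used in the earlier sketches (x(t+1) = (A_t(θ_t) + A1_t(θ_t) w(t)) x(t) with periodic data), the conditional second moments Q_j(t) = E[x(t) x(t)' 1{θ_t = j}] evolve linearly, and AMSS holds exactly when the spectral radius of the one-period (monodromy) matrix of this recursion is below one. The sketch builds that matrix from Kronecker products; it works in full vec coordinates rather than the reduced induced-matrix coordinates of (11), which leaves the spectral radius unchanged for this positive operator. All numerical data are assumptions.

```python
import numpy as np

def monodromy(A, A1, P, T, n, N):
    """One-period matrix of the second-moment recursion
       Q_j(t+1) = sum_i P[t][i, j] * (A[t][i] Q_i(t) A[t][i]' + A1[t][i] Q_i(t) A1[t][i]'),
       written in stacked vec coordinates (block (j, i) of size n^2 x n^2)."""
    M = np.eye(N * n * n)
    for t in range(T):
        L = np.zeros((N * n * n, N * n * n))
        for i in range(N):
            blk = np.kron(A[t][i], A[t][i]) + np.kron(A1[t][i], A1[t][i])
            for j in range(N):
                L[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = P[t][i, j] * blk
        M = L @ M
    return M

# Illustrative two-mode, period-2 data (assumed, not taken from the paper)
A  = [[np.array([[0.5, 0.1], [0.0, 0.6]]), np.array([[0.4, 0.2], [0.1, 0.5]])],
      [np.array([[0.6, 0.0], [0.1, 0.4]]), np.array([[0.5, 0.1], [0.0, 0.3]])]]
A1 = [[0.2 * np.eye(2), 0.1 * np.eye(2)], [0.1 * np.eye(2), 0.2 * np.eye(2)]]
P  = [np.array([[0.7, 0.3], [0.4, 0.6]]), np.array([[0.5, 0.5], [0.2, 0.8]])]

rho = max(abs(np.linalg.eigvals(monodromy(A, A1, P, T=2, n=2, N=2))))
print("spectral radius over one period:", rho, "-> AMSS" if rho < 1 else "-> not AMSS")
```

The same routine is reused later to check internal stability of a closed-loop design.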

Lemma 4. System (2) is detectable if and only if, for some time instant, there does not exist any nonzero element satisfying the spectral (PBH-type) conditions stated in [19].

We are prepared to establish the following Barbashin-Krasovskii stability criterion for (4).

Theorem 5. If system (2) is detectable, then it is AMSS if and only if the PLE (13) has a unique periodic solution.

Proof. By Theorem 2.5 [18], if system (2) is AMSS, then the PLE (13) admits a unique periodic solution. Next, we show the converse assertion. Suppose that (13) has a periodic solution but (2) is not AMSS; then, by Lemma 3, the one-period evolution operator has an eigenvalue of modulus no less than one. Denote by ρ its spectral radius; then ρ ≥ 1. According to the Krein-Rutman theorem, there is a positive definite element which is an eigenvector associated with ρ. Since (2) is detectable, by Lemma 4, for some time instant there exists at least one index such that (14) holds. Thus, for inner product (6), it can be computed from (13) that (15) holds. Due to the periodicity of the solution, (15) forces the corresponding output terms to vanish at all relevant time instants and modes, which contradicts (14). Hence, (2) is AMSS.
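When the system is AMSS, the periodic solution of a Lyapunov-type equation such as (13) can be computed by iterating the backward recursion until the values repeat with the period. The sketch assumes a PLE of the standard form X_t(i) = A_t(i)' E_i(X_{t+1}) A_t(i) + A1_t(i)' E_i(X_{t+1}) A1_t(i) + C_t(i)' C_t(i) with E_i(X) = Σ_j p_ij(t) X(j); this is the usual shape for the system class sketched earlier and is not copied from (13) itself.

```python
import numpy as np

def solve_ple(A, A1, C, P, T, n, N, sweeps=200, tol=1e-10):
    """Backward iteration for a periodic Lyapunov equation of the assumed form
       X_t(i) = A_t(i)' E_i(X_{t+1}) A_t(i) + A1_t(i)' E_i(X_{t+1}) A1_t(i) + C_t(i)' C_t(i),
       with E_i(X) = sum_j P[t][i, j] X(j). Converges when the system is AMSS."""
    X = [[np.zeros((n, n)) for _ in range(N)] for _ in range(T)]
    for _ in range(sweeps):
        X_old = [[Xi.copy() for Xi in Xt] for Xt in X]
        for t in reversed(range(T)):                     # one backward sweep over the period
            X_next = X[(t + 1) % T]
            for i in range(N):
                E = sum(P[t][i, j] * X_next[j] for j in range(N))   # conditional expectation
                X[t][i] = (A[t][i].T @ E @ A[t][i] + A1[t][i].T @ E @ A1[t][i]
                           + C[t][i].T @ C[t][i])
        err = max(np.max(np.abs(X[t][i] - X_old[t][i])) for t in range(T) for i in range(N))
        if err < tol:
            return X                                     # periodic solution found
    raise RuntimeError("no periodic solution found (the system may not be AMSS)")

# Example call with the illustrative data of the earlier sketches (A, A1, P) and output maps C:
# X = solve_ple(A, A1, C, P, T=2, n=2, N=2)
```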

Remark 6. In [18], a similar result was proven under the condition of stochastic detectability. According to [19], (uniform) detectability is a weaker prerequisite than stochastic detectability. Therefore, Theorem 5 improves the result of Theorem 4.1 [18] within the concerned framework.

4. H2/H∞ Control

In this section, a game theoretic approach will be employed to deal with the infinite horizon H2/H∞ control problem for (1). Under the assumption of detectability, a necessary and sufficient condition is provided for the existence of the desired controller.

Theorem 7. For system (1), if the coupled periodic difference equations (CPDEs) (17)–(20) admit a periodic quaternion solution on the infinite horizon, where the two associated pairs are detectable, then the H2/H∞ state-feedback control is given by the gains constructed from this solution.
Conversely, if system (1) is detectable and the H2/H∞ control problem for (1) is solved by a periodic state-feedback controller, then CPDEs (17)–(20) admit a unique periodic quaternion solution on the infinite horizon.

Proof. Sufficiency: (a) Let us first show that the proposed controller stabilizes system (1) internally. To this end, we rewrite (19) in the form (22). Since system (1) is detectable, by Lemma 4, we can prove that the associated closed-loop pair is also detectable. From (22) and Theorem 5, it follows that the corresponding closed-loop dynamics is AMSS. Similarly, we can prove that the other closed-loop dynamics is also AMSS. Hence, the controller stabilizes system (1) internally.
(b) We next verify the disturbance attenuation requirement. Implementing the controller into system (1), we obtain the closed-loop system (24). Noting that this closed-loop dynamics is AMSS, the associated component of the quaternion solution is a stabilizing solution of (17). By use of Theorem 1 [20], we deduce that system (24) satisfies the prescribed attenuation level.
(c) Finally, we show that the controller minimizes the output energy. By (17) and (24), we can complete squares to obtain (25), which implies that the worst-case disturbance associated with the controller is the one constructed from the solution. Applying this worst-case disturbance to system (1), we obtain (26). It remains to show that the controller fulfills the optimal index (27), which is an LQ optimal control problem. Since (19) is equivalent to (22) and the closed-loop pair is detectable, by Theorem 5, the associated component of the solution is a stabilizing solution of (19). Making use of Proposition 6.3 [18], we arrive at (28). This justifies the sufficiency statement.
Necessity: Assume that a periodic state-feedback controller solves the considered H2/H∞ control problem. Then this controller stabilizes system (1) internally and achieves the prescribed attenuation level. By Theorem 1 [20], we conclude that (17) admits a stabilizing solution, which implies that the corresponding closed-loop dynamics is AMSS. Since system (24) is internally stable, by Corollary 3.9 [18], we deduce that the relevant cost is finite for any admissible disturbance. As shown in the sufficiency part, by use of (17) and (24), we arrive at (25), which indicates that the worst-case disturbance is the one constructed from the solution. Further, imposing this disturbance on system (1) gives (26). Because the controller is optimal, it solves the LQ control problem (27). Moreover, from the detectability of system (1), the relevant closed-loop pair is detectable. Recalling that the closed-loop dynamics is AMSS, by Theorem 5, (22) (i.e., (19)) has a stabilizing solution. Finally, by completing squares in terms of (19) and (26), we obtain (28), which justifies the optimality of the controller. This ends the proof.
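Step (a) of the proof rests on the closed-loop system being AMSS once the gain obtained from the CPDE solution is applied. Whatever procedure produces candidate periodic gains, this property can be checked numerically with the spectral test of Lemma 3 applied to the closed-loop coefficients. The sketch assumes the simple closed-loop form x(t+1) = ((A_t(θ_t) + B_t(θ_t) K_t(θ_t)) + A1_t(θ_t) w(t)) x(t) under u = Kx, with placeholder B and K, and reuses the monodromy routine from the sketch following Lemma 3.

```python
import numpy as np

def closed_loop_amss(A, A1, B, K, P, T, n, N):
    """Spectral test of Lemma 3 applied to the closed-loop x(t+1) = ((A + B K) + A1*w(t)) x(t).
       Relies on the monodromy() routine from the sketch following Lemma 3."""
    Acl = [[A[t][i] + B[t][i] @ K[t][i] for i in range(N)] for t in range(T)]
    rho = max(abs(np.linalg.eigvals(monodromy(Acl, A1, P, T, n, N))))
    return rho < 1, rho

# Placeholder input matrices and candidate periodic gains (illustrative only):
B = [[np.array([[0.0], [1.0]]), np.array([[1.0], [0.0]])],
     [np.array([[0.0], [1.0]]), np.array([[1.0], [0.0]])]]
K = [[np.array([[-0.1, -0.3]]), np.array([[-0.2, -0.1]])],
     [np.array([[-0.1, -0.2]]), np.array([[-0.1, -0.1]])]]
# stable, rho = closed_loop_amss(A, A1, B, K, P, T=2, n=2, N=2)
```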

Remark 8. If the coefficients of (1) reduce to time-invariant ones and the Markov chain is homogeneous, then Theorem 7 reduces to the conclusion of Theorem 3 [14]. Hence, the current study can be regarded as a periodic extension of [14]. At present, there still exists some difficulty in generalizing the above method to the controller design for general time-varying Markov jump systems as in [22]. To this end, a time-varying version of the PBH criterion has to be developed.

5. Numerical Example

Consider the following two-dimensional Markov jump system with the periodic coefficients listed as follows. The transition probability matrix of the Markov chain is specified accordingly. For a prescribed disturbance attenuation level γ, by use of the Runge-Kutta algorithm, we can solve CPDEs (17)–(20) and obtain the feedback gains of the controller. By Lemma 4, it can be verified that the two relevant pairs are both detectable. Applying the controller to the periodic Markov jump system, we obtain the closed-loop state trajectory and the corresponding performance. Figure 1(a) displays 50 sampled state trajectories originating from the given initial state, while Figure 1(b) demonstrates the cumulative energy of the system output.

Figure 1: State responses and performance.
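The sketch below indicates how the experiment behind Figure 1 can be organized for any given periodic coefficients and feedback gains: simulate a number of closed-loop sample paths from the chosen initial state and accumulate the output energy Σ|z(s)|^2, which corresponds to the cumulative energy shown in Figure 1(b). The simple closed-loop form and the commented placeholder data are assumptions carried over from the earlier sketches, not the example's actual matrices or gains.

```python
import numpy as np

def simulate_closed_loop(A, A1, B, C, K, P, T, x0, i0, horizon, rng):
    """One sample path of x(t+1) = ((A + B K) + A1*w(t)) x(t) with output z = C x; returns the
       states and the running output energy sum_{s<=t} |z(s)|^2 (as in Figure 1(b))."""
    x, mode = np.array(x0, dtype=float), i0
    states, energy, acc = [x.copy()], [], 0.0
    for t in range(horizon):
        k = t % T
        z = C[k][mode] @ x
        acc += float(z @ z)
        energy.append(acc)
        w = rng.standard_normal()
        x = (A[k][mode] + B[k][mode] @ K[k][mode] + A1[k][mode] * w) @ x
        mode = rng.choice(len(P[k]), p=P[k][mode])
        states.append(x.copy())
    return np.array(states), np.array(energy)

# Example usage with the placeholder data of the earlier sketches:
# rng = np.random.default_rng(2)
# runs = [simulate_closed_loop(A, A1, B, C, K, P, 2, [1.0, -1.0], 0, 60, rng) for _ in range(50)]
# mean_energy = np.mean([e for _, e in runs], axis=0)   # cumulative output energy, cf. Figure 1(b)
```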

6. Conclusion

In this paper, an infinite horizon H2/H∞ control problem has been settled for discrete-time periodic Markov jump systems with multiplicative noise. Under the condition of (uniform) detectability, a game theoretic H2/H∞ controller is obtained by solving a group of CPDEs. Note that there remain some open topics on this issue. For example, it is interesting as well as challenging to investigate the H2/H∞ control problem with input or output saturation constraints [23], which deserves further study.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61304074), the Research Award Fund for Outstanding Young Scientists of Shandong Province (nos. BS2013 DX009 and BS2013DX012), the Research Fund for the Taishan Scholar Project of Shandong Province, the SDUST Research Fund (no. 2014JQJH103), and the Shandong Joint Innovative Center for Safe and Effective Mining Technology and Equipment of Coal Resources.

References

  1. J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis, “State-space solutions to standard H2 and H∞ control problems,” IEEE Transactions on Automatic Control, vol. 34, no. 8, pp. 831–847, 1989.
  2. D. Hinrichsen and A. J. Pritchard, “Stochastic H∞,” SIAM Journal on Control and Optimization, vol. 36, no. 5, pp. 1504–1538, 1998.
  3. A. El Bouhtouri, D. Hinrichsen, and A. J. Pritchard, “H∞-type control for discrete-time stochastic systems,” International Journal of Robust and Nonlinear Control, vol. 9, no. 13, pp. 923–948, 1999.
  4. E. Gershon and U. Shaked, “H∞ output-feedback control of discrete-time systems with state-multiplicative noise,” Automatica, vol. 44, no. 2, pp. 574–579, 2008.
  5. Z. Wang, D. W. C. Ho, H. Dong, and H. Gao, “Robust H∞ finite horizon control for a class of stochastic nonlinear time-varying systems subject to sensor and actuator saturations,” IEEE Transactions on Automatic Control, vol. 55, no. 7, pp. 1716–1722, 2010.
  6. B.-S. Chen and W. Zhang, “Stochastic H2/H∞ control with state-dependent noise,” IEEE Transactions on Automatic Control, vol. 49, no. 1, pp. 45–57, 2004.
  7. O. L. V. Costa, M. D. Fragoso, and M. G. Todorov, Continuous-Time Markov Jump Linear Systems, Springer, Berlin, Germany, 2012.
  8. Q. Zhu and J. Cao, “Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 3, pp. 467–479, 2012.
  9. Q. Zhu, “pth moment exponential stability of impulsive stochastic functional differential equations with Markovian switching,” Journal of the Franklin Institute: Engineering and Applied Mathematics, vol. 351, no. 7, pp. 3965–3986, 2014.
  10. Q. Zhu, J. Cao, T. Hayat, and F. Alsaadi, “Robust stability of Markovian jump stochastic neural networks with time delays in the leakage terms,” Neural Processing Letters, vol. 41, no. 1, pp. 1–27, 2013.
  11. Q. Zhu, R. Rakkiyappan, and A. Chandrasekar, “Stochastic stability of Markovian jump BAM neural networks with leakage delays and impulse control,” Neurocomputing, vol. 136, pp. 136–151, 2014.
  12. L. Zhang, “H∞ estimation for discrete-time piecewise homogeneous Markov jump linear systems,” Automatica, vol. 45, no. 11, pp. 2570–2576, 2009.
  13. Z. Lin, Y. Lin, and W. Zhang, “A unified design for state and output feedback H∞ control of nonlinear stochastic Markov jump systems with state and disturbance-dependent noise,” Automatica, vol. 45, no. 12, pp. 2955–2962, 2009.
  14. T. Hou, W. Zhang, and H. Ma, “Infinite horizon H2/H∞ optimal control for discrete-time Markov jump systems with (x,u,v)-dependent noise,” Journal of Global Optimization, vol. 57, no. 4, pp. 1245–1262, 2013.
  15. S. Aberkane and V. Dragan, “H∞ filtering of periodic Markovian jump systems: application to filtering with communication constraints,” Automatica, vol. 48, no. 12, pp. 3151–3156, 2012.
  16. V. Dragan, T. Morozan, and A.-M. Stoica, “Output-based H2 optimal controllers for a class of discrete-time stochastic linear systems with periodic coefficients,” International Journal of Robust and Nonlinear Control, vol. 25, no. 13, pp. 1897–1926, 2014.
  17. T. Morozan and V. Dragan, “An H2-type norm of a discrete-time linear stochastic system with periodic coefficients simultaneously affected by an infinite Markov chain and multiplicative white noise perturbations,” Stochastic Analysis and Applications, vol. 32, no. 5, pp. 776–801, 2014.
  18. V. Dragan, T. Morozan, and A.-M. Stoica, Mathematical Methods in Robust Control of Discrete-Time Linear Stochastic Systems, Springer, New York, NY, USA, 2010.
  19. T. Hou, H. Ma, and W. Zhang, “Spectral tests for observability and detectability of periodic Markov jump systems with nonhomogeneous Markov chain,” Automatica, in press.
  20. H. Ma, W. Zhang, and T. Hou, “Infinite horizon H2/H∞ control for discrete-time time-varying Markov jump systems with multiplicative noise,” Automatica, vol. 48, no. 7, pp. 1447–1454, 2012.
  21. W. Zhang and B.-S. Chen, “H-representation and applications to generalized Lyapunov equations and linear stochastic systems,” IEEE Transactions on Automatic Control, vol. 57, no. 12, pp. 3009–3022, 2012.
  22. V. Dragan and T. Morozan, “Stability and robust stabilization to linear stochastic systems described by differential equations with Markovian jumping and multiplicative white noise,” Stochastic Analysis and Applications, vol. 20, no. 1, pp. 33–92, 2002.
  23. G. Wei, Z. Wang, H. Shu, and J. Fang, “A delay-dependent approach to H∞ filtering for stochastic delayed jumping systems with sensor non-linearities,” International Journal of Control, vol. 80, no. 6, pp. 885–897, 2007.