Complexity
Volume 2018, Article ID 1586846, 9 pages
https://doi.org/10.1155/2018/1586846
Research Article

Stability Analysis for a Class of Discrete-Time Nonhomogeneous Markov Jump Systems with Multiplicative Noises

1College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
2Institute of Complexity Science, Qingdao University, Qingdao 266071, China
3Department of Electrical Engineering, Lakehead University, Thunder Bay, ON, Canada P7B 5E1
4College of Information and Electrical Engineering, Shandong Jianzhu University, Jinan 250101, China

Correspondence should be addressed to Shaowei Zhou; zsw9675@163.com

Received 12 July 2017; Revised 17 December 2017; Accepted 9 January 2018; Published 25 February 2018

Academic Editor: Michele Scarpiniti

Copyright © 2018 Shaowei Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper is concerned with a class of discrete-time nonhomogeneous Markov jump systems with multiplicative noises, whose time-varying transition probability matrices take values in a convex polytope. Both stochastic stability and finite-time stability are considered. Stability criteria involving infinitely many matrix inequalities are obtained by means of a parameter-dependent Lyapunov function. Furthermore, these infinite matrix inequalities are converted into finitely many linear matrix inequalities (LMIs) via a set of slack matrices. Finally, two numerical examples are given to demonstrate the validity of the proposed theoretical methods.

1. Introduction

In practical engineering, many dynamic systems are subject to abrupt parameter variations caused by the external environment or their internal structure. Markov jump system models have attracted more and more attention because they effectively describe this kind of system, and they have become a hot topic in control theory. Many mature and systematic results have been obtained [1–5].

It should be pointed out that most of the current research is done in the framework of a homogeneous Markov process (or Markov chain), that is, under the assumption that the transition rate (or transition probability) matrix of the system is the same at every time instant. In fact, owing to the influence of the environment, this assumption is difficult to satisfy. For example, in a Markov jump networked control system, the transition probability is time-variant because the packet dropouts and network delays differ between periods. Another typical example can be found in fault-prone systems, where a Markov process is used to describe the failure rate, which is influenced by factors such as age and usage rate. Obviously, this is not a homogeneous process. Driven by these practical problems, people have turned to nonhomogeneous Markov jump systems [6–21].

In order to describe the nonhomogeneous property, several assumptions have been put forward. In [6], the time-varying transition probabilities of discrete-time Markov jump systems are considered to be finite piecewise homogeneous with two types of variation in the finite set, arbitrary variation and stochastic variation, which implies that the transition probabilities vary but are invariant within each interval; the estimation problem is investigated there. This assumption is generalized to continuous-time Markov jump systems [7], Markov jump neural networks [8, 9], complex networks [10], and singular Markov jump systems [11], where robust stability, stochastic stability, passivity analysis, and synchronization are studied, respectively. Another way of describing the time-varying characteristics is in a polytopic sense. It was proposed for the first time in [12], where a sufficient condition for stochastic stability is derived by using a parameter-dependent stochastic Lyapunov function. The main idea is that, when the exact transition probability matrix is not known, it is assumed to take values in a convex polytope with some given vertices. The polytopic model is more general and includes the piecewise homogeneous Markov jump system model with arbitrary switching as a special case. Subsequently, many new results have been obtained for this model. In [13–15], mode-dependent, variation-dependent, and observer-based controllers are designed to guarantee stochastic stability and a prescribed performance index. Other control problems, such as l2–l∞ control [16] and fault detection [17], are also considered. An application to DC motors can be found in [18]. The model is used not only in discrete-time systems but also in continuous-time systems; see [19]. However, to the authors' knowledge, the model has not yet been applied to Markov jump systems with multiplicative noises. Such systems with multiplicative noises are often encountered in engineering practice. For example, a practical model with control-input-dependent noise in CDMA systems can be found in [22]. This note aims to make a first attempt to investigate the stability of such systems.

Our purpose is to address the stability analysis and stabilization controller design for discrete-time nonhomogeneous Markov jump systems with multiplicative noises and with polytopic transition probability matrices. It is well known that stability analysis is the basis for design and synthesis in all control systems, and fruitful results have been obtained; see [23–30] and the references therein. In this article, two types of stability, stochastic stability (SS) in the mean square sense and finite-time stability (FTS), will be considered.

The organization of this paper is as follows. In Section 2, we provide model formulation and give some definitions. Section 3 is devoted to stochastic stability and stabilization. In Section 4, the finite-time stability and stabilization will be discussed. Some numerical examples are presented in Section 5. Finally, a brief concluding remark is given in Section 6.

Notations 1. ℝⁿ is the n-dimensional Euclidean space; Aᵀ is the transpose of matrix A; det(A) is the determinant of matrix A; A > 0 means that A is a positive definite matrix; I is the identity matrix; ‖x‖ is the Euclidean norm of vector x; E is the expectation operator; δ_{ij} is the Kronecker function; diag{A₁, …, A_l} is a block diagonal matrix with block diagonal elements A₁, …, A_l; λ_min(A) (λ_max(A)) is the minimum (maximum) eigenvalue of matrix A; Cond(A) is the condition number of matrix A; ∗ is the symmetric part of a symmetric matrix.

2. Problem Formulation

Consider the following discrete-time stochastic Markov jump system with multiplicative noises:

x(k+1) = A(r(k))x(k) + C(r(k))x(k)w(k), x(0) = x₀, (1)

and the corresponding controlled system:

x(k+1) = A(r(k))x(k) + B(r(k))u(k) + [C(r(k))x(k) + D(r(k))u(k)]w(k), (2)

where x(k) and u(k) are the system state and control input, respectively; x₀ is the initial state; and A(r(k)), B(r(k)), C(r(k)), and D(r(k)) are real coefficient matrices with appropriate dimensions. {w(k)} is a sequence of real random variables defined on a complete probability space with E[w(k)] = 0 and E[w(k)w(s)] = δ_{ks}. The jump process {r(k), k ≥ 0} is described by a discrete-time nonhomogeneous Markov chain which takes values in a finite set S = {1, 2, …, N}. The transition probability is π_{ij}(k) = P(r(k+1) = j | r(k) = i), which is the probability of jumping from mode i at time k to mode j at time k+1 and satisfies π_{ij}(k) ≥ 0, Σ_{j=1}^{N} π_{ij}(k) = 1. The transition probability matrix is Π(k) = (π_{ij}(k)). It is assumed that Π(k) is time-varying and takes values in a polytope with vertices Π^(m), m = 1, 2, …, M; that is,

Π(k) = Σ_{m=1}^{M} α_m(k) Π^(m), α_m(k) ≥ 0, Σ_{m=1}^{M} α_m(k) = 1, (3)

where Π^(m) are given matrices and the entries of Π^(m) are written as π^(m)_{ij}. {w(k)} is independent of {r(k)}.
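
To make the polytopic formulation concrete, the following sketch simulates one sample path of the open-loop system, drawing the transition matrix at each step as a random convex combination of the polytope vertices. All numerical data below (the matrices A, C and the vertices) are illustrative placeholders, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode data (the paper's matrices are not reproduced here):
# A[i] is the drift matrix and C[i] the multiplicative-noise matrix of mode i.
A = [np.array([[0.6, 0.1], [0.0, 0.5]]),
     np.array([[0.4, -0.2], [0.1, 0.7]])]
C = [0.1 * np.eye(2), 0.2 * np.eye(2)]

# Two vertices of the transition-probability polytope (each row sums to 1).
P_vert = [np.array([[0.9, 0.1], [0.3, 0.7]]),
          np.array([[0.5, 0.5], [0.6, 0.4]])]

def transition_matrix(alpha):
    """Convex combination of the vertices: P(k) = sum_m alpha_m(k) P^(m)."""
    return sum(a * Pm for a, Pm in zip(alpha, P_vert))

def simulate(x0, r0, N):
    """One sample path of x(k+1) = (A_{r(k)} + w(k) C_{r(k)}) x(k)."""
    x, r = np.array(x0, dtype=float), r0
    path = [x.copy()]
    for _ in range(N):
        alpha = rng.dirichlet(np.ones(len(P_vert)))  # time-varying weights
        P = transition_matrix(alpha)
        w = rng.standard_normal()                    # E[w(k)] = 0, E[w(k)^2] = 1
        x = (A[r] + w * C[r]) @ x                    # state update
        r = rng.choice(len(A), p=P[r])               # mode jump
        path.append(x.copy())
    return np.array(path)

path = simulate([1.0, -1.0], 0, 50)
```

Any rule for the weights α_m(k) (here, Dirichlet samples) is admissible, since the polytope constraint only requires nonnegative weights summing to one at each step.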

For convenience, when r(k) = i, we denote the coefficient matrices by A_i, B_i, C_i, and D_i.

In this paper, we mainly formulate some conditions for SS and FTS. These are two different and independent stability concepts: SS describes the asymptotic behavior of the system over an infinite time horizon, while FTS reflects the transient performance of the system over a finite time interval. Now, let us recall the definitions.

Definition 2 (see [1]). System (1) is called stochastically stable if, for any initial state x₀ and initial mode r(0) = r₀ ∈ S,

Σ_{k=0}^{∞} E‖x(k)‖² < ∞. (4)

System (2) is called stochastically stabilizable if there exists a sequence of feedback controls u(k) = K(r(k))x(k) such that, for any initial state x₀ and initial mode r₀, the closed-loop system

x(k+1) = [A(r(k)) + B(r(k))K(r(k))]x(k) + [C(r(k)) + D(r(k))K(r(k))]x(k)w(k) (5)

is stochastically stable.
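
The summability requirement in Definition 2 can be probed numerically: the sketch below estimates the truncated sum of E‖x(k)‖² by Monte Carlo for hypothetical data and one fixed admissible transition matrix. A value that stays bounded as the horizon grows is evidence for, not a proof of, stochastic stability.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-mode data; Pi is one fixed admissible transition matrix.
A = [np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.3, 0.0], [0.2, 0.6]])]
C = [0.1 * np.eye(2), 0.1 * np.eye(2)]
Pi = np.array([[0.9, 0.1], [0.3, 0.7]])

def cumulative_energy(x0, r0, N=60, trials=400):
    """Monte Carlo estimate of sum_{k=0}^{N} E ||x(k)||^2."""
    total = 0.0
    for _ in range(trials):
        x, r = np.array(x0, dtype=float), r0
        for _ in range(N + 1):
            total += float(x @ x)                         # accumulate ||x(k)||^2
            x = (A[r] + rng.standard_normal() * C[r]) @ x  # noisy update
            r = rng.choice(2, p=Pi[r])                     # mode jump
    return total / trials

energy = cumulative_energy([1.0, -1.0], 0)
```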

Definition 3 (see [23]). System (1) is called finite-time stable with respect to (c₁, c₂, T, R) if

x(0)ᵀR x(0) ≤ c₁ ⟹ E[x(k)ᵀR x(k)] < c₂, k = 1, 2, …, T, (6)

where R > 0 is a given matrix and c₁, c₂ (0 < c₁ < c₂), and T are given real numbers. System (2) is said to be finite-time stabilizable if there exists a sequence of feedback controls u(k) such that the closed-loop system (5) is finite-time stable.
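
Definition 3 can likewise be checked empirically: the sketch below estimates E[x(k)ᵀR x(k)] over the horizon by Monte Carlo and compares it against the bound c₂. For simplicity a single-mode system x(k+1) = (A + w(k)C)x(k) is used; the matrices, R, c₁, c₂, and the horizon N are hypothetical values, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative single-mode data and FTS parameters (all hypothetical).
A = np.array([[0.8, 0.2], [0.0, 0.9]])
C = 0.1 * np.eye(2)
R = np.eye(2)              # weighting matrix of Definition 3
c1, c2, N = 1.0, 5.0, 10   # bounds and horizon

def weighted_moments(x0, trials=2000):
    """Monte Carlo estimate of E[x(k)' R x(k)] for k = 0, ..., N."""
    acc = np.zeros(N + 1)
    for _ in range(trials):
        x = np.array(x0, dtype=float)
        acc[0] += x @ R @ x
        for k in range(1, N + 1):
            x = (A + rng.standard_normal() * C) @ x
            acc[k] += x @ R @ x
    return acc / trials

x0 = np.array([1.0, 0.0])               # x0' R x0 = 1 <= c1
est = weighted_moments(x0)
finite_time_stable = bool(np.all(est[1:] < c2))
```

Note that FTS is a statement about a specific tuple (c₁, c₂, T, R): the same system may fail the check for a smaller c₂ or a longer horizon.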

3. Stochastic Stability and Stabilization

In this section, some sufficient conditions will be provided for the stochastic stability and stabilization of systems (1) and (2).

Theorem 4. If there exist a set of matrices such that (7) holds for every mode and every pair of polytope vertices, then system (1) is stochastically stable.

Proof. Let and be the σ-algebra generated by . Then, So we have obviously, ; thus Hence, reducing to Therefore, system (1) is stochastically stable by Definition 2. The proof is completed.

Remark 5. Theorem 4 develops a sufficient condition for SS of system (1) when the transition probability matrix is arbitrarily valued in a polytope with given vertices. For finite Markov jump switching systems, stochastic stability is equivalent to asymptotic mean square stability (AMSS) [1]. So Theorem 4 is also a sufficient condition for AMSS.
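
For the homogeneous case, AMSS admits a spectral test (cf. Costa et al. [1]): the second-moment recursion is stable iff the spectral radius of its vectorized operator is below one. Since each constant-vertex chain is an admissible realization of the polytopic chain, a radius below one at every vertex is a necessary condition for SS of the polytopic system. The data below are illustrative, not the paper's.

```python
import numpy as np

def mss_spectral_radius(A, C, Pi):
    """Spectral radius of the second-moment operator of a *homogeneous*
    MJLS with multiplicative noise: the moment recursion
      vec(Q_j(k+1)) = sum_i Pi[i, j] (kron(A_i, A_i) + kron(C_i, C_i)) vec(Q_i(k))
    is stable iff this radius is < 1."""
    Nm, n2 = len(A), A[0].shape[0] ** 2
    M = np.zeros((Nm * n2, Nm * n2))
    for i in range(Nm):
        Li = np.kron(A[i], A[i]) + np.kron(C[i], C[i])
        for j in range(Nm):
            M[j * n2:(j + 1) * n2, i * n2:(i + 1) * n2] = Pi[i, j] * Li
    return float(np.abs(np.linalg.eigvals(M)).max())

# Illustrative data (not the paper's example).
A = [np.array([[0.5, 0.1], [0.0, 0.4]]), np.array([[0.3, 0.0], [0.2, 0.6]])]
C = [0.1 * np.eye(2), 0.1 * np.eye(2)]
P_vert = [np.array([[0.9, 0.1], [0.3, 0.7]]),
          np.array([[0.5, 0.5], [0.6, 0.4]])]

# Necessary check: mean-square stability at every polytope vertex.
radii = [mss_spectral_radius(A, C, Pm) for Pm in P_vert]
```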

Remark 6. A convex parameter space method was used in the control design of uncertain linear systems without jumping. State feedback conditions can be easily derived by exploiting the change of matrix variables due to Geromel and coworkers [31].

It is hard to verify the conditions in Theorem 4 because they involve infinitely many matrix inequalities. Hence, the theorem is of more theoretical than practical significance. To make the conditions solvable, we introduce a set of slack matrices and convert the infinite matrix inequalities into finite LMIs in the following theorem.

Theorem 7. The following two statements are equivalent:
(1) There exist a set of matrices , , and , such that (7) holds for every and , , , satisfying , , , and .
(2) There exist a set of matrices and matrices , such that (13) holds, where and .
The proof is similar to that of the corresponding proposition in [12], so it is omitted.

From Theorem 7, by replacing the coefficient matrices with those of the closed-loop system, it is easy to obtain the following stabilization condition and controller design method.

Corollary 8. If there exist a set of matrices and matrices , such that the LMIs (14) hold, then system (2) is stochastically stabilized by the state feedback controller with the corresponding feedback gains .
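
Since the LMIs (14) are not reproduced in this copy, the sketch below instead verifies a posteriori one standard vertex-wise Lyapunov inequality for such systems; the paper's slack-matrix LMIs are richer than this shape, and all data below are hypothetical.

```python
import numpy as np

def vertex_lyapunov_ok(A, C, P_vert, P, tol=1e-9):
    """Check, for every mode i and polytope vertex m, the inequality
        A_i' Pbar A_i + C_i' Pbar C_i - P_i < 0,
    where Pbar = sum_j pi^(m)_{ij} P_j. This is one standard sufficient
    shape for mean-square stability of MJLS with multiplicative noise."""
    for Pi in P_vert:
        for i in range(len(A)):
            Pbar = sum(Pi[i, j] * P[j] for j in range(len(P)))
            S = A[i].T @ Pbar @ A[i] + C[i].T @ Pbar @ C[i] - P[i]
            # negative definite <=> largest eigenvalue of symmetric part < 0
            if np.max(np.linalg.eigvalsh((S + S.T) / 2)) >= -tol:
                return False
    return True

# Illustrative data (not the paper's example) and a trivial candidate P-set.
A = [np.array([[0.6, 0.1], [0.0, 0.5]]), np.array([[0.4, -0.2], [0.1, 0.7]])]
C = [0.1 * np.eye(2), 0.2 * np.eye(2)]
P_vert = [np.array([[0.9, 0.1], [0.3, 0.7]]),
          np.array([[0.5, 0.5], [0.6, 0.4]])]
P = [np.eye(2), np.eye(2)]
feasible = vertex_lyapunov_ok(A, C, P_vert, P)
```

In practice, one would search for the P_i (and the gains) with a semidefinite programming solver rather than guessing a candidate set; the check above is only the verification step.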

4. Finite-Time Stability and Control

In this section, we will consider finite-time stability. First, a theoretical criterion is given.

Theorem 9. If there exist a scalar and matrices , , such that (15) and (16) hold, then system (1) is finite-time stable with respect to (c₁, c₂, T, R).

Proof. Let . Applying the same procedure as in the proof of Theorem 4, we can get It follows from (15) that ; that is, While so Hence, from Definition 3, system (1) is finite-time stable with respect to (c₁, c₂, T, R). The proof is completed.

Next, we convert the infinite matrix inequalities (15) and (16) into finite matrix inequalities by means of slack matrices and properties of the condition number, and we design a state feedback controller that renders system (2) finite-time stabilizable.

Theorem 10. (1) If there exist a scalar , matrices , and matrices , such that (21) and (22) hold, where and , then system (1) is finite-time stable.
(2) If there exist a scalar , matrices , and matrices , such that (23) and (22) hold, then system (2) is finite-time stabilized by the state feedback controller with the corresponding feedback gains .

Proof. (1) The proof can be divided into two parts. Firstly, we prove that (15) holds if (21) holds. We adopt the method used in the corresponding proposition of [12].
From (21), . As is positive definite, must be positive definite. While , it follows that is invertible. On the other hand, which means that . Then, which is equivalent to

Because of the invertibility of and , we have

Let ; then (27) can be rewritten as, where . By Schur complement, (28) is equivalent to

Obviously, (15) holds after multiplying (29) by the corresponding coefficients and adding them.

Next, we prove that (16) holds if (22) is true. Before proving this, we need to recall two properties of the condition number:
(a) Cond, ;
(b) Cond.

It follows from (a) and (b) that

When (22) holds and , one has

(2) This can be easily proved by the same procedure as in part (1).

Remark 11. We should point out that, unlike the equivalence of (7) and (13), (21) is just a sufficient condition for (15) due to the existence of .

Remark 12. In fact, (22) can be guaranteed by the following LMIs: Therefore, we can obtain a set of feasible feedback controls either by solving (23) and (32), or by solving (23) and checking whether the solutions satisfy (22).
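
The condition-number facts underlying the proof of Theorem 10 can be illustrated numerically. Since properties (a) and (b) are not legible in this copy, the sketch below demonstrates the standard facts that such arguments rely on: for a symmetric positive definite matrix A, Cond(A) = λ_max(A)/λ_min(A), and λ_min(A)‖x‖² ≤ xᵀAx ≤ λ_max(A)‖x‖² for every x.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a random symmetric positive definite matrix.
B = rng.standard_normal((3, 3))
A = B @ B.T + 3.0 * np.eye(3)        # SPD by construction
lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
lmin, lmax = lam[0], lam[-1]

# Cond(A) in the 2-norm equals lambda_max / lambda_min for SPD matrices.
cond_ok = bool(np.isclose(np.linalg.cond(A), lmax / lmin))

# Rayleigh-quotient bounds on the quadratic form x' A x.
x = rng.standard_normal(3)
q = float(x @ A @ x)
bounds_ok = (lmin * float(x @ x) - 1e-9 <= q <= lmax * float(x @ x) + 1e-9)
```

These bounds are exactly how a c₁–c₂ estimate of the form (16) is usually pieced together: the quadratic form xᵀRx is sandwiched between scaled norms, and the ratio of the two scales is the condition number.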

5. Numerical Examples

In this section, two illustrative examples are presented to show the effectiveness of the proposed main results.

Example 1. Consider system (2) with the following coefficient matrices: The initial state is given as . The vertices of the polytope of transition probability matrices are given as follows: By solving LMIs (14), it can be concluded that system (2) is stochastically stabilized via the feedback gains . The system modes and the state trajectories of the open-loop and closed-loop systems are shown in Figures 1–3, respectively.

Figure 1: The system modes for Example 1.
Figure 2: The states of open-loop system for Example 1.
Figure 3: The states of closed-loop system for Example 1.

Example 2. Consider system (2) with the following coefficient matrices: The initial state is given as and the vertices of the polytope of transition probability matrices are given as follows: Given c₁, c₂, T, and R and choosing , by solving LMIs (23) and checking (22), it can be concluded that system (2) is finite-time stabilized via the feedback gains . The system modes and the state trajectories of the open-loop and closed-loop systems are shown in Figures 4–6, respectively.

Figure 4: The system modes for Example 2.
Figure 5: of the open-loop system for Example 2.
Figure 6: of the closed-loop system for Example 2.

6. Conclusions

In this paper, we have investigated the stochastic stability and finite-time stability of a class of discrete-time nonhomogeneous Markov jump systems with multiplicative noises and polytopic transition probability matrices. Some sufficient conditions for stability are proposed via a parameter-dependent Lyapunov function, and stabilization controllers are designed by using the LMI toolbox. The simulation results show the effectiveness of the developed techniques. This research motivates us to study the continuous-time version in the future.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61473160, 61503224), the Special Funds for Postdoctoral Innovative Projects of Shandong Province (201403009), the Research Award Funds for Outstanding Young Scientists of Shandong Province (BS2014SF005), and SDUST Research Fund (2015TDJH105).

References

  1. O. L. V. Costa, M. D. Fragoso, and R. P. Marques, Discrete-Time Markov Jump Linear Systems, Springer, London, UK, 2005.
  2. O. L. V. Costa, M. D. Fragoso, and M. G. Todorov, Continuous-Time Markov Jump Linear Systems, Springer, Berlin, Germany, 2013.
  3. L. Zhang, T. Yang, P. Shi, and Y. Zhu, Analysis and Design of Markov Jump Systems with Complex Transition Probabilities, Springer, Cham, Switzerland, 2016.
  4. X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, UK, 2006.
  5. Y. Fang, Stability analysis of linear control systems with uncertain parameters [Ph.D. thesis], Case Western Reserve University, Cleveland, OH, USA, 1994.
  6. L. X. Zhang, “H∞ estimation for discrete-time piecewise homogeneous Markov jump linear systems,” Automatica, vol. 45, no. 11, pp. 2570–2576, 2009.
  7. M. Faraji-Niri, M. R. Jahed-Motlagh, and M. Barkhordari-Yazdi, “Robust stabilization of uncertain non-homogeneous Markov jump linear systems,” in Proceedings of the 3rd International Conference on Control, Instrumentation, and Automation, pp. 42–46, Tehran, Iran, December 2013.
  8. Z.-G. Wu, J. H. Park, H. Su, and J. Chu, “Stochastic stability analysis of piecewise homogeneous Markovian jump neural networks with mixed time-delays,” Journal of The Franklin Institute, vol. 349, no. 6, pp. 2136–2150, 2012.
  9. Z.-G. Wu, J. H. Park, H. Su, and J. Chu, “Passivity analysis of Markov jump neural networks with mixed time-delays and piecewise-constant transition rates,” Nonlinear Analysis: Real World Applications, vol. 13, no. 5, pp. 2423–2431, 2012.
  10. Z.-X. Li, J. H. Park, and Z.-G. Wu, “Synchronization of complex networks with nonhomogeneous Markov jump topology,” Nonlinear Dynamics, vol. 74, no. 1-2, pp. 65–75, 2013.
  11. Z.-G. Wu, J. H. Park, H. Su, and J. Chu, “Stochastic stability analysis for discrete-time singular Markov jump systems with time-varying delay and piecewise-constant transition probabilities,” Journal of The Franklin Institute, vol. 349, no. 9, pp. 2889–2902, 2012.
  12. S. Aberkane, “Stochastic stabilization of a class of nonhomogeneous Markovian jump linear systems,” Systems & Control Letters, vol. 60, no. 3, pp. 156–160, 2011.
  13. Y. Yin, P. Shi, F. Liu, and K. L. Teo, “Filtering for discrete-time nonhomogeneous Markov jump systems with uncertainties,” Information Sciences, vol. 259, pp. 118–127, 2014.
  14. Y. Yin, P. Shi, F. Liu, and K. L. Teo, “Observer-based H∞ control on nonhomogeneous discrete-time Markov jump systems,” Journal of Dynamic Systems, Measurement, and Control, vol. 135, no. 4, Article ID 041016, 2013.
  15. P. Shi, Y. Yin, F. Liu, and J. Zhang, “Robust control on saturated Markov jump systems with missing information,” Information Sciences, vol. 265, pp. 123–138, 2014.
  16. Y. Zhang, Y. Ou, Y. Zhou, X. Wu, and W. Sheng, “Observer-based l2–l∞ control for discrete-time nonhomogeneous Markov jump Lur'e systems with sensor saturations,” Neurocomputing, vol. 162, pp. 141–149, 2015.
  17. Y. Long and G.-H. Yang, “Fault detection for a class of nonhomogeneous Markov jump systems based on delta operator approach,” Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, vol. 227, no. 1, pp. 129–141, 2013.
  18. Y. Yin, P. Shi, F. Liu, and C. C. Lim, “Robust control for nonhomogeneous Markov jump processes: an application to DC motor device,” Journal of The Franklin Institute, vol. 351, no. 6, pp. 3322–3338, 2014.
  19. Y. Ding and H. Liu, “Stability analysis of continuous-time Markovian jump time-delay systems with time-varying transition rates,” Journal of The Franklin Institute, vol. 353, no. 11, pp. 2418–2430, 2016.
  20. S. Aberkane, “Bounded real lemma for nonhomogeneous Markovian jump linear systems,” IEEE Transactions on Automatic Control, vol. 58, no. 3, pp. 797–801, 2013.
  21. T. Hou, H. Ma, and W. Zhang, “Spectral tests for observability and detectability of periodic Markov jump systems with nonhomogeneous Markov chain,” Automatica, vol. 63, pp. 175–181, 2016.
  22. L. Qian and Z. Gajic, “Variance minimization stochastic power control in CDMA systems,” in Proceedings of the IEEE International Conference on Communications, vol. 3, pp. 1763–1767, New York, NY, USA, 2002.
  23. F. Amato and M. Ariola, “Finite-time control of discrete-time linear systems,” IEEE Transactions on Automatic Control, vol. 50, no. 5, pp. 724–729, 2005.
  24. F. Amato, R. Ambrosino, M. Ariola, C. Cosentino, and G. De Tommasi, Finite-Time Stability and Control, Springer, London, UK, 2014.
  25. F. Amato, M. Ariola, and C. Cosentino, “Robust finite-time stabilisation of uncertain linear systems,” International Journal of Control, vol. 84, no. 12, pp. 2117–2127, 2011.
  26. F. Tan, B. Zhou, and G.-R. Duan, “Finite-time stabilization of linear time-varying systems by piecewise constant feedback,” Automatica, vol. 68, pp. 277–285, 2016.
  27. X. Luan, F. Liu, and P. Shi, “Observer-based finite-time stabilization for extended Markov jump systems,” Asian Journal of Control, vol. 13, no. 6, pp. 925–935, 2011.
  28. Z. Yan, W. Zhang, and G. Zhang, “Finite-time stability and stabilization of Itô stochastic systems with Markovian switching: mode-dependent parameter approach,” IEEE Transactions on Automatic Control, vol. 60, no. 9, pp. 2428–2433, 2015.
  29. W. Zhang and X. An, “Finite-time control of linear stochastic systems,” International Journal of Innovative Computing, Information & Control, vol. 4, pp. 689–696, 2008.
  30. F. Amato, M. Ariola, and C. Cosentino, “Finite-time control of discrete-time linear systems: analysis and design conditions,” Automatica, vol. 46, no. 5, pp. 919–924, 2010.
  31. J. C. Geromel, P. L. Peres, and J. Bernussou, “On a convex parameter space method for linear control design of uncertain systems,” SIAM Journal on Control and Optimization, vol. 29, no. 2, pp. 381–402, 1991.