Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 963089, 13 pages
http://dx.doi.org/10.1155/2013/963089
Research Article

LMI-Based Model Predictive Control for a Class of Constrained Uncertain Fuzzy Markov Jump Systems

1Research Center of Intelligent Control and Systems, Harbin Institute of Technology, Harbin, Heilongjiang 150080, China
2Department of Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, NC 27606, USA
3Department of Engineering, Faculty of Engineering and Science, University of Agder, 4898 Grimstad, Norway

Received 8 August 2013; Accepted 27 September 2013

Academic Editor: Jun Hu

Copyright © 2013 Ting Yang and Hamid Reza Karimi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An extended model predictive control algorithm is proposed to address constrained robust model predictive control. New upper bounds on arbitrarily long time intervals are derived by introducing two external parameters, which can relax the requirements for the increments of the Lyapunov function. The main merit of this new approach compared to other well-known techniques is the reduced conservativeness. The proposed method is proved to be effective for a class of uncertain fuzzy Markov jump systems with partially unknown transition probabilities. A single pendulum example is given to illustrate the advantages and effectiveness of the proposed controller design method.

1. Introduction

Model predictive control (MPC), a powerful strategy for dealing with input and state constraints in a control system, first attracted notable attention in industrial applications [1]. Since stability analysis approaches for MPC were proposed, many significant advances in understanding MPC from a control theoretician's viewpoint have been obtained [2–4]. Meanwhile, the MPC formulation for constrained linear systems has been naturally extended not only to nonlinear systems [5–7] but also to some more complex systems, for example, stochastic systems [8, 9], time-delay systems [10], hybrid systems [11], uncertain systems [12], and so on. By employing the idea of control structure optimization, for example, distributed MPC [13], centralized MPC, and coordinated MPC, the theory of MPC has been further refined. As is well known, efficient tuning of the large set of adjustable parameters has long been a hard problem for MPC; fortunately, many practical methods have already been developed (see [14] and the references therein for more details). Nowadays, many researchers work on "fast MPC" in order to ease the heavy online computational burden [15–17]. However, it should be noted that an MPC algorithm may perform poorly when model mismatch occurs, in spite of the inherent robustness provided by the feedback strategy based on the plant measurement at the next sampling time.

Therefore, in the past decades, MPC algorithms for uncertain systems have received considerable attention in the literature. A robust constrained MPC scheme considering two classes of system uncertainty, namely polytopic uncertainty and structured feedback uncertainty, was analyzed by means of linear matrix inequalities (LMIs) by Kothare et al. [18]. As opposed to the single linear static state-feedback law in [18], Bloemen et al. [19] divided the input sequence into two parts, the first inputs being computed by a finite-horizon MPC method and the remaining inputs being generated by a linear static state-feedback law. Because the number of free control moves was a design variable, the end-point state-weighting matrix and the invariant ellipsoid were treated as variables in the online optimization, and thus this algorithm achieved a trade-off between feasibility and performance. This work has been improved by applying parameter-dependent Lyapunov functions [20–22]. In [23–25], an efficient robust constrained MPC with a time-varying terminal constraint set was developed. The resulting algorithm attains lower online computation and a larger stabilizable set of states while retaining the unconstrained optimal performance as much as possible. It is worth mentioning that, in order to analyze the stability of the system and obtain an upper bound on the cost function, a constraint is usually imposed on the increments of the Lyapunov function [26]. However, this constraint is stricter than necessary, since to guarantee stability the increments of the Lyapunov function are only required to be negative. Motivated by this, in the present paper a modified MPC scheme in which two extra parameters are introduced is proposed, based on the properties of convergent series, in order to reduce the conservativeness.

Many real systems, such as solar thermal receivers, economic systems, and networked control systems, may experience random abrupt changes in their inputs, internal variables, and other parameters. Uncertainties of this kind are best represented by stochastic models [27–29], such as the fuzzy Markov jump system (MJS), in which the subsystems are modeled as fuzzy systems. Based on the approximation property of fuzzy logic systems [30–32], fuzzy MJSs have been developed for nonlinear control systems with abrupt changes in their structure and parameters.

In this paper, the problem of MPC controller design for a class of uncertain fuzzy MJSs with partially unknown transition probabilities is addressed. This class of systems covers a wide range of practical dynamic systems combining nonlinear behavior with changes or uncertainties in structure or parameters. Meanwhile, constraints, such as energy limitations, levels in tanks, flows in piping, and bounds on pH value, can be systematically included in the controller design process. The system considered here is much more complex than fuzzy systems with uncertainties or MJSs with partially unknown transition probabilities [33, 34], because the uncertainties consist of four levels (the system parameter uncertainty, the membership degree uncertainty, the mode uncertainty, and the transition probability uncertainty) and they are not mutually independent. Therefore, although the formulation looks similar, the results for fuzzy systems with uncertainties or MJSs with partially unknown transition probabilities cannot be directly applied to this scenario. A comparison with the method in [20] (modified in [22]) is carried out by simulation on a single pendulum control problem. Owing to the prevalence of LMIs in convex optimization problems, especially for cases with high-order matrices, and the availability of reliable general-purpose solvers, an LMI formulation is employed to deal with the underlying optimization problems in this study. The remainder of this paper is organized as follows. The mathematical model of the system under consideration is formulated and some preliminaries are given in Section 2. Section 3 is devoted to deriving the results for the controller design. Numerical examples are provided in Section 4, and the paper is concluded in Section 5.

Notation. The notation used throughout the paper is fairly standard. The superscript "$T$" stands for matrix transposition. $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. The notation $P > 0$ ($P \geq 0$) means that $P$ is real symmetric and positive (semi)definite, and $P > Q$ ($P \geq Q$) means $P - Q > 0$ ($P - Q \geq 0$). In symmetric block matrices or complex matrix expressions, we use an asterisk ($\ast$) to represent a term that is induced by symmetry, and $\operatorname{diag}\{\cdots\}$ stands for a block-diagonal matrix. $I$ and $0$ represent the identity matrix and the zero matrix, respectively. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations. $\|\cdot\|$ stands for the usual Euclidean norm.

2. Problem Formulation and Preliminaries

2.1. Uncertain Fuzzy MJSs with Partially Unknown Transition Probabilities

Consider the following time-varying discrete-time fuzzy Markov jump system. For each mode $r_k = i \in \mathcal{S}$, the fuzzy model is composed of $r$ plant rules that can be represented as follows.

Plant Rule $l$. If $\theta_1(k)$ is $M_{l1}$, ..., and $\theta_p(k)$ is $M_{lp}$, then
$$x(k+1) = A_l(r_k)\, x(k) + B_l(r_k)\, u(k), \qquad y(k) = C_l(r_k)\, x(k), \tag{1}$$
where $x(k) \in \mathbb{R}^{n_x}$ is the state vector, $u(k) \in \mathbb{R}^{n_u}$ is the control input, $y(k) \in \mathbb{R}^{n_y}$ is the system output, $\theta(k) = [\theta_1(k), \ldots, \theta_p(k)]$ is the premise variable vector, $p$ is the number of premise variables, and $M_{lj}$, $l = 1, \ldots, r$, $j = 1, \ldots, p$, is a fuzzy set. For simplicity, we denote $r_k$ by $i$. The Markov chain $\{r_k, k \geq 0\}$, taking values in a finite set $\mathcal{S} = \{1, 2, \ldots, s\}$, governs the switching among the different system modes. The system matrices of the $l$th rule in mode $i$ are denoted by $[A_{li}\; B_{li}\; C_{li}]$, and each such triple belongs to a polytopic set
$$\Omega_{li} = \operatorname{Co}\bigl\{ [A_{li}^{(1)}\; B_{li}^{(1)}\; C_{li}^{(1)}], \ldots, [A_{li}^{(L)}\; B_{li}^{(L)}\; C_{li}^{(L)}] \bigr\},$$
where $\operatorname{Co}$ denotes the convex hull. In other words, if $[A_{li}\; B_{li}\; C_{li}] \in \Omega_{li}$, then for some nonnegative $\lambda_1, \ldots, \lambda_L$ summing to one, the following holds:
$$[A_{li}\; B_{li}\; C_{li}] = \sum_{m=1}^{L} \lambda_m\, [A_{li}^{(m)}\; B_{li}^{(m)}\; C_{li}^{(m)}].$$
For the sake of simplicity, we denote $A_l(r_k = i)$, $B_l(r_k = i)$, and $C_l(r_k = i)$ by $A_{li}$, $B_{li}$, and $C_{li}$, respectively.

As is commonly done in the literature, it is assumed that the premise variable vector does not depend on the control variables and the disturbance. The center-average defuzzification method is used as follows:
$$h_l(\theta(k)) = \frac{\prod_{j=1}^{p} M_{lj}(\theta_j(k))}{\sum_{m=1}^{r} \prod_{j=1}^{p} M_{mj}(\theta_j(k))},$$
where $M_{lj}(\theta_j(k))$ is the grade of membership of $\theta_j(k)$ in $M_{lj}$. The weights satisfy $h_l(\theta(k)) \geq 0$ and $\sum_{l=1}^{r} h_l(\theta(k)) = 1$. In what follows, we use the notation $h_l = h_l(\theta(k))$ for simplicity.
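
For illustration, the following Python sketch computes the normalized grades used in the center-average defuzzification above. The triangular membership functions, the single premise variable, and the rule centers are placeholders chosen here for the example, not quantities taken from the paper.

import numpy as np

def triangular(theta, center, width):
    """Placeholder membership function: triangular, with peak value 1 at `center`."""
    return max(0.0, 1.0 - abs(theta - center) / width)

def normalized_grades(theta, centers, width=1.0):
    """Center-average defuzzification weights h_l(theta): each rule's grade is
    normalized so the weights are nonnegative and sum to one."""
    grades = np.array([triangular(theta, c, width) for c in centers])
    total = grades.sum()
    if total == 0.0:                      # no rule fires: fall back to uniform weights
        return np.full(len(centers), 1.0 / len(centers))
    return grades / total

# Example: two rules with premises "theta is about 0" and "theta is about 1".
h = normalized_grades(0.3, centers=[0.0, 1.0])
# h[0] + h[1] == 1, and the blended model is sum_l h[l] * (A_l, B_l, C_l).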

A more compact presentation of system (1) is given by
$$x(k+1) = \sum_{l=1}^{r} h_l \bigl[ A_{li}\, x(k) + B_{li}\, u(k) \bigr], \qquad y(k) = \sum_{l=1}^{r} h_l\, C_{li}\, x(k).$$
The process $\{r_k, k \geq 0\}$ is described by a discrete-time homogeneous Markov chain, which takes values in the finite set $\mathcal{S}$ with mode transition probabilities
$$\pi_{ij} = \Pr(r_{k+1} = j \mid r_k = i),$$
where $\pi_{ij} \geq 0$ for all $i, j \in \mathcal{S}$ and $\sum_{j \in \mathcal{S}} \pi_{ij} = 1$. These quantities are collected in the transition probability matrix (TPM) $\Pi = [\pi_{ij}]$, in which, in general, some entries are known and the remaining ones are inaccessible. The transition probabilities are therefore considered to be partially available and, for each $i \in \mathcal{S}$, the index set can be divided into two parts,
$$\mathcal{S} = \mathcal{S}_K^{i} \cup \mathcal{S}_{UK}^{i}, \qquad \mathcal{S}_K^{i} = \{ j : \pi_{ij} \text{ is known} \}, \qquad \mathcal{S}_{UK}^{i} = \{ j : \pi_{ij} \text{ is unknown} \}.$$
In addition, if $\mathcal{S}_K^{i} \neq \emptyset$, it is further described as $\mathcal{S}_K^{i} = \{ K_1^{i}, \ldots, K_{m_i}^{i} \}$, where $K_m^{i}$ represents the index of the $m$th known element in the $i$th row of the matrix $\Pi$. And we denote $\pi_K^{i} = \sum_{j \in \mathcal{S}_K^{i}} \pi_{ij}$.
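
As a concrete illustration of the known/unknown partition of the TPM, the Python sketch below stores unknown entries as NaN and extracts, for each mode, the index sets of known and unknown transition probabilities. The particular four-mode pattern is an arbitrary example, not the matrix used in the paper.

import numpy as np

# Example TPM with partially unknown entries (NaN marks an unknown probability).
Pi = np.array([
    [0.3,    np.nan, 0.1,    np.nan],
    [np.nan, np.nan, 0.6,    np.nan],
    [0.2,    0.3,    np.nan, np.nan],
    [np.nan, 0.4,    np.nan, 0.5],
])

def partition_row(Pi, i):
    """Return (known_indices, unknown_indices, known_mass) for mode i,
    that is, the sets written S_K^i and S_UK^i in the text."""
    row = Pi[i]
    known = np.where(~np.isnan(row))[0]
    unknown = np.where(np.isnan(row))[0]
    return known, unknown, row[known].sum()

for i in range(Pi.shape[0]):
    known, unknown, mass = partition_row(Pi, i)
    # The unknown probabilities in row i are only constrained to be
    # nonnegative and to sum to 1 - mass.
    print(i, known.tolist(), unknown.tolist(), round(1.0 - mass, 3))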

The following fuzzy control law is chosen:
$$u(k) = \sum_{l=1}^{r} h_l\, F_{li}\, x(k). \tag{11}$$
Under this control law, the closed-loop system is given by
$$x(k+1) = \sum_{l=1}^{r} \sum_{m=1}^{r} h_l h_m \bigl( A_{li} + B_{li} F_{mi} \bigr) x(k) = \bar{A}_i\, x(k), \tag{12}$$
where $\bar{A}_i = \sum_{l=1}^{r} \sum_{m=1}^{r} h_l h_m ( A_{li} + B_{li} F_{mi} )$.
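
The control law (11) blends one gain per rule for the currently active Markov mode, in the style of parallel distributed compensation. A minimal Python sketch follows; the gains and state are placeholders rather than results computed in the paper.

import numpy as np

def pdc_control(x, h, gains, mode):
    """Fuzzy state feedback u(k) = sum_l h_l * F[l][mode] @ x(k).
    `h` are the normalized membership grades, `gains[l][mode]` the rule/mode gains."""
    return sum(h_l * gains[l][mode] @ x for l, h_l in enumerate(h))

# Placeholder gains for 2 rules x 2 modes (illustrative numbers only).
F = [[np.array([[-1.0, -0.5]]), np.array([[-1.2, -0.6]])],
     [np.array([[-0.8, -0.4]]), np.array([[-1.0, -0.5]])]]
u = pdc_control(np.array([0.2, -0.1]), h=[0.7, 0.3], gains=F, mode=0)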

For the fuzzy MJS, the following definition will be adopted in the rest of this paper.

Definition 1. The fuzzy MJS in (12) is said to be stochastically stable if, for any initial condition $(x_0, r_0)$, the following inequality holds:
$$\mathbb{E}\left\{ \sum_{k=0}^{\infty} \| x(k) \|^2 \;\Big|\; x_0, r_0 \right\} < \infty,$$
where $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation.

The objective of this paper is to design a state-feedback model predictive controller such that the closed-loop system (12) is stochastically stable.

2.2. Model Predictive Control

Model predictive control (MPC) makes use of a receding horizon principle, which means that at each sampling time $k$, an optimization algorithm is applied to compute a sequence of future control signals. The performance index used depends on the predicted future states of the plant, which can be calculated from the newly obtained measurements and the predictive model. Here, $x(k+i \mid k)$ and $y(k+i \mid k)$ denote the state and output, respectively, of the plant at time $k+i$ predicted from the measurements at time $k$; $u(k+i \mid k)$ represents the control move at time $k+i$ computed by the optimization problem (14); $N_p$ is the prediction horizon and $N_c$ is the control horizon. In what follows, $x(k)$ and $u(k)$ are used as shorthand for $x(k \mid k)$ and $u(k \mid k)$, respectively.

In this section, the problem formulation for MPC using the model (12) is discussed. The goal is to find, at each sampling time $k$, an input sequence $\{ u(k \mid k), u(k+1 \mid k), \ldots \}$ which minimizes the following performance index:
$$J(k) = \sum_{i=0}^{N_p} \mathbb{E}\bigl\{ x(k+i \mid k)^{T} Q_1\, x(k+i \mid k) + u(k+i \mid k)^{T} R\, u(k+i \mid k) \bigr\}, \tag{14}$$
where the matrices $Q_1 > 0$ and $R > 0$ are symmetric weighting matrices. In particular, we consider the case where $N_c = N_p$, which means that the control horizon is equal to the prediction horizon. The input and output constraints considered in this paper are Euclidean norm bounds and componentwise peak bounds, given, respectively, as
$$\| u(k+i \mid k) \| \leq u_{\max}, \qquad | y_j(k+i \mid k) | \leq y_{j,\max}, \quad j = 1, \ldots, n_y, \; i \geq 0. \tag{17}$$
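
As a small illustration of how the performance index and the constraint set (17) are evaluated along a predicted trajectory, consider the following Python sketch; the weighting matrices, bounds, and trajectory containers are placeholders rather than quantities from the paper.

import numpy as np

def predicted_cost(xs, us, Q1, R):
    """Quadratic performance index: sum of x' Q1 x + u' R u over the predicted
    trajectory points (xs[i], us[i]), i = 0..N."""
    return sum(float(x @ Q1 @ x + u @ R @ u) for x, u in zip(xs, us))

def constraints_satisfied(us, ys, u_max, y_max):
    """Euclidean norm bound on the inputs and componentwise peak bound on the
    outputs, as in (17)."""
    return (all(np.linalg.norm(u) <= u_max for u in us)
            and all(np.all(np.abs(y) <= y_max) for y in ys))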

3. Model Predictive Control Using Linear Matrix Inequalities

In this section, the problem is solved by deriving an upper bound on the objective function under the state-feedback control law $u(k+i \mid k) = \sum_{l=1}^{r} h_l F_{li}\, x(k+i \mid k)$. Therefore, by minimizing this upper bound at each sampling time $k$, we obtain the sequence of control input signals.

3.1. Derivation of the Upper Bound

In this paper, the following objective function is considered:
$$J_{\infty}(k) = \sum_{i=0}^{\infty} \mathbb{E}\bigl\{ \varphi(k+i \mid k) \bigr\},$$
where
$$\varphi(k+i \mid k) = x(k+i \mid k)^{T} Q_1\, x(k+i \mid k) + u(k+i \mid k)^{T} R\, u(k+i \mid k).$$

Consider a parameter-dependent function
$$V(k+i \mid k) = x(k+i \mid k)^{T} P(h, r_{k+i})\, x(k+i \mid k),$$
where $P(h, r_{k+i}) > 0$ and $x(k+i \mid k)$ is the system state. Assume that there exists a convergent series such that, at each sampling time $k$, the relaxed decrease condition (21) holds for every prediction step $i \geq 0$ and every trajectory satisfying (12). Summing (21) from $i = 0$ to $i = m$ yields a bound in which the increments of $V$ are weighted by the partial sums of the series, so that $V$ is no longer required to decrease at every single step. In particular, letting $m \to \infty$, the robust performance objective function can be finite only if $\mathbb{E}\{ V(k+i \mid k) \} \to 0$ as $i \to \infty$. Summing (21) from $i = 0$ to $\infty$ then gives an upper bound on the objective expressed in terms of $V(k \mid k)$ and the limit of the series.

Set $\gamma(k) = V(k \mid k) = x(k \mid k)^{T} P(h, r_k)\, x(k \mid k)$. For the robust performance objective function to be finite, the predicted state must converge, so $\mathbb{E}\{ V(k+i \mid k) \} \to 0$ as $i \to \infty$; hence, the summed form of (21) yields $J_{\infty}(k) \leq \gamma(k)$. In order to guarantee stability, the increments of the Lyapunov function need only be negative in the accumulated sense permitted by the convergent series, rather than at every step. This gives an upper bound on the robust performance objective. Thus the goal of our robust MPC algorithm is redefined as finding, at each time step $k$, a state-feedback control law that minimizes this upper bound, that is, $\min \gamma(k)$ subject to (21).
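
For orientation, the LaTeX sketch below reproduces the classical argument of [18, 20] that the present derivation relaxes: a uniform decrease of the Lyapunov function is imposed and summed over the infinite horizon to obtain the upper bound. This corresponds to the special case discussed in Remark 4 below; the generalized inequality (21) with the additional series parameters is not restated here.

\[
\mathbb{E}\{V(k+i+1\mid k)\}-\mathbb{E}\{V(k+i\mid k)\}
  \le -\,\mathbb{E}\bigl\{x(k+i\mid k)^{T}Q_{1}\,x(k+i\mid k)+u(k+i\mid k)^{T}R\,u(k+i\mid k)\bigr\},\quad i\ge 0 .
\]
Summing from \(i=0\) to \(\infty\) and using \(\mathbb{E}\{V(k+i\mid k)\}\to 0\) (required for a finite cost) gives
\[
J_{\infty}(k)\;\le\;V(k\mid k)\;\le\;\gamma(k),
\]
so minimizing \(\gamma(k)\) minimizes an upper bound on the worst-case objective.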

3.2. Minimization of the Upper Bound without Constraints

Theorem 2. Consider the closed-loop uncertain system (12) with the polytopic uncertainty set $\Omega_{li}$. Let $x(k)$ be the state measured at sampling time $k$. Assume that there are no constraints on the control input and plant output. Then the state-feedback matrices $F_{li}$ in the control law $u(k+i \mid k) = \sum_{l=1}^{r} h_l F_{li}\, x(k+i \mid k)$ that minimize the upper bound $\gamma(k)$ on the robust performance objective function at sampling time $k$ are given by $F_{li} = Y_{li} G_{li}^{-1}$, where $Y_{li}$ and $G_{li}$ are obtained from the solution (if it exists) to the following linear objective minimization problem: minimize $\gamma$ over $\gamma$, $Q_{li}$, $G_{li}$, and $Y_{li}$ subject to LMI constraints enforcing the initial-state bound and the relaxed decrease condition (21) at every vertex of the polytope and for every admissible mode.

Proof. Minimization of the upper bound means minimizing $V(k \mid k) = x(k \mid k)^{T} P(h, r_k)\, x(k \mid k)$, which is equivalent to imposing
$$x(k \mid k)^{T} P(h, r_k)\, x(k \mid k) \leq \gamma.$$
Defining $Q_{li} = \gamma P_{li}^{-1}$ and using Schur complements, this is equivalent to
$$\begin{bmatrix} 1 & x(k \mid k)^{T} \\ x(k \mid k) & Q_{li} \end{bmatrix} \geq 0.$$
The parameter-dependent function $V$ is required to satisfy (27). By substituting the control law $u(k+i \mid k) = \sum_{l} h_l F_{li}\, x(k+i \mid k)$ and the state-space equations in (12), inequality (27) becomes a matrix inequality in the closed-loop matrices. This is equivalent to (34). Then, (34) is satisfied for all admissible membership grades if it holds with the series replaced by its minimum term. Substituting $Q_{li} = \gamma P_{li}^{-1}$, using Schur complements, and then pre- and postmultiplying the resulting inequality by $G_{li}^{T}$ and $G_{li}$, we can see that this is equivalent to (37). Inequality (37) is affine in the vertex matrices. Following the logic of de Oliveira, Bernussou, and Geromel [35] and Cuzzola et al. [20], (37) is satisfied for all admissible uncertainties if and only if there exist matrices $Q_{li}$, $G_{li}$, $Y_{li}$ and a positive scalar $\gamma$ satisfying the LMI constraints of Theorem 2. The feedback matrix is then given by $F_{li} = Y_{li} G_{li}^{-1}$.
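
As a rough computational companion to Theorem 2, the following Python/cvxpy sketch solves the classical unconstrained robust MPC LMI problem of [18] for a single linear mode with polytopic vertices, to which the proposed conditions reduce in the sense of Remark 4. It does not implement the paper's mode-dependent, fuzzy, series-relaxed LMIs, and the function and variable names are our own.

import numpy as np
import cvxpy as cp

def kothare_gain(x0, vertices, Q1, R, u_dim):
    """Classical LMI-based robust MPC step (Kothare et al. [18], single mode):
    minimize gamma s.t. [1 x'; x Q] >= 0 and, at every polytope vertex (A, B),
    the Schur-complement form of the Lyapunov decrease / cost inequality."""
    nx = x0.size
    gamma = cp.Variable(nonneg=True)
    Q = cp.Variable((nx, nx), symmetric=True)
    Y = cp.Variable((u_dim, nx))

    S1 = np.linalg.cholesky(Q1).T          # factor with S1' S1 = Q1
    S2 = np.linalg.cholesky(R).T           # factor with S2' S2 = R

    cons = [Q >> 1e-8 * np.eye(nx),
            cp.bmat([[np.array([[1.0]]), x0.reshape(1, -1)],
                     [x0.reshape(-1, 1), Q]]) >> 0]
    for A, B in vertices:
        M = cp.bmat([
            [Q,              (A @ Q + B @ Y).T,     Q @ S1.T,              Y.T @ S2.T],
            [A @ Q + B @ Y,  Q,                     np.zeros((nx, nx)),    np.zeros((nx, u_dim))],
            [S1 @ Q,         np.zeros((nx, nx)),    gamma * np.eye(nx),    np.zeros((nx, u_dim))],
            [S2 @ Y,         np.zeros((u_dim, nx)), np.zeros((u_dim, nx)), gamma * np.eye(u_dim)],
        ])
        cons.append(0.5 * (M + M.T) >> 0)  # symmetrize explicitly before the PSD constraint

    cp.Problem(cp.Minimize(gamma), cons).solve(solver=cp.SCS)
    return Y.value @ np.linalg.inv(Q.value), float(gamma.value)

# Usage with two arbitrary vertices (illustrative numbers only):
A1 = np.array([[1.0, 0.1], [0.0, 1.0]]);   B1 = np.array([[0.0], [0.1]])
A2 = np.array([[1.0, 0.1], [-0.05, 1.0]]); B2 = np.array([[0.0], [0.12]])
F, gam = kothare_gain(np.array([1.0, 0.0]), [(A1, B1), (A2, B2)],
                      Q1=np.eye(2), R=0.1 * np.eye(1), u_dim=1)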

Remark 3. Based on the above analysis, the choice of the control horizon amounts to the choice of the convergent series. That is, given a constant control horizon $N_c$, we can construct an infinite convergent series whose terms possess the properties required in Section 3.1. Therefore, we will discuss neither the control horizon nor the prediction horizon, which is assumed to be equal to $N_c$.

Remark 4. In particular, when the control horizon equals one, the conditions in the above derivation can be satisfied if and only if the convergent series degenerates to the trivial case. It then follows that the method is equivalent to the approach in [20] (modified in [22]).

3.3. Minimization of the Upper Bound with Input and Output Constraints

Theorem 5. Consider the closed-loop uncertain system (12) with the polytopic uncertainty set $\Omega_{li}$. Let $x(k)$ be the state measured at sampling time $k$, and assume that the predicted state remains, for all $i \geq 0$, in the invariant ellipsoid obtained in Theorem 2. Then the constraints on the control input and plant output in the form of (17) can be transformed into conditions expressed by the linear matrix inequalities (42) and (43)-(44), respectively, in the same decision variables as Theorem 2.

Proof. Following [36], the peak value of the control input over the invariant ellipsoid can be bounded in terms of the decision matrices. By virtue of Schur complements, we have $\| u(k+i \mid k) \| \leq u_{\max}$ if (46) holds. Using logic similar to [35], (46) is proved to be equivalent to (42). The proof of (43)-(44), which can be obtained by an analogous argument, is omitted for the sake of brevity.
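
In the classical setting of [18], the Euclidean-norm input bound in (17) is enforced by one extra LMI on the same variables Q and Y used in the sketch after Theorem 2. The fragment below shows that baseline constraint in Python/cvxpy; it is an illustration of the idea, not the paper's fuzzy/Markovian version of (42).

import numpy as np
import cvxpy as cp

def input_norm_lmi(Q, Y, u_max):
    """Classical input constraint of [18]: if x stays in the ellipsoid
    {z : z' Q^{-1} z <= 1}, then ||u|| = ||Y Q^{-1} x|| <= u_max whenever
    [[u_max^2 I, Y], [Y', Q]] >= 0 (a Schur-complement restatement)."""
    nu = Y.shape[0]
    M = cp.bmat([[u_max ** 2 * np.eye(nu), Y],
                 [Y.T, Q]])
    return 0.5 * (M + M.T) >> 0

# Appended to the constraint list of the earlier sketch:
#   cons.append(input_norm_lmi(Q, Y, u_max=2.0))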

Remark 6. The obtained theorems can easily be reduced to simpler situations, for example, fuzzy systems with uncertainties or Markov jump systems, and the resulting LMI problems can be solved with standard tools such as LMITOOL, YALMIP, and GloptiPoly.

4. Numerical Simulation

In this section, we present a numerical example that clearly illustrates the improvement obtained with Theorem 2. We compare, under different situations, the improved method with the approach in [20] (modified in [22]), which can be recovered as the special case of Remark 4. Consider the single pendulum system, in which $x_1$ is the angular displacement, $x_2$ is the angular velocity, $u$ is the control torque, $w$ is the disturbance, and the remaining coefficients are jump parameters taking mode-dependent values. The angular displacement is assumed to vary within a bounded operating interval. This system can be represented as the following discrete-time fuzzy model with partly unknown transition probabilities. Choosing a sampling period and suitable membership functions, and for the sake of simplicity, we use two T-S fuzzy rules to approximate this system.
Plant Rule 1. If $x_1(k)$ is about $0$, then the system is described by the first local linear model.
Plant Rule 2. If $x_1(k)$ is about the boundary of the operating interval, then the system is described by the second local linear model,
where the vertex matrices of each local model are obtained by linearizing and discretizing the pendulum dynamics at the corresponding operating points.
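
To make the two-rule T-S construction concrete, the Python sketch below builds a pair of local linear models for a pendulum by the sector-nonlinearity idea and blends them with normalized memberships. All numerical values (mass, length, sampling period, operating range, feedback gain) are hypothetical placeholders and are not the values or the jump modes used in the paper.

import numpy as np

# Hypothetical parameters for illustration only.
m, l, g, Ts = 1.0, 0.5, 9.81, 0.05
theta_max = np.pi / 3          # assumed operating range |x1| <= theta_max

def euler_discretize(Ac, Bc):
    """Forward-Euler discretization with sampling period Ts."""
    return np.eye(2) + Ts * Ac, Ts * Bc

def local_model(slope):
    """Local linear model of theta'' = -(g/l)*sin(theta) + u/(m*l^2),
    with sin(theta) replaced by slope*theta (sector-nonlinearity idea)."""
    Ac = np.array([[0.0, 1.0], [-(g / l) * slope, 0.0]])
    Bc = np.array([[0.0], [1.0 / (m * l ** 2)]])
    return euler_discretize(Ac, Bc)

beta = np.sin(theta_max) / theta_max
A1, B1 = local_model(1.0)      # Rule 1: x1 about 0            (sin(x1) ~ x1)
A2, B2 = local_model(beta)     # Rule 2: x1 about +/- theta_max

def memberships(x1):
    """Normalized grades h1, h2 so that sin(x1) = (h1*1.0 + h2*beta)*x1
    holds exactly on the operating range."""
    s = np.sin(x1) / x1 if abs(x1) > 1e-9 else 1.0
    h1 = float(np.clip((s - beta) / (1.0 - beta), 0.0, 1.0))
    return h1, 1.0 - h1

# One step of the blended (defuzzified) model under a hypothetical gain F.
F = np.array([[-3.0, -1.0]])   # placeholder gain, not a result of the paper
x = np.array([0.5, 0.0])
h1, h2 = memberships(x[0])
A = h1 * A1 + h2 * A2
B = h1 * B1 + h2 * B2
u = F @ x
x_next = A @ x + B @ u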

Our purpose here is to illustrate the advantages of the proposed method by comparing the optimal upper bound $\gamma$ under different situations. First, for a fixed choice of the two additional parameters and a given initial state, the state responses of the closed-loop fuzzy MJS under input constraints are shown in Figure 1. Meanwhile, the optimal values of $\gamma$ compared for different initial conditions are listed in Table 1. One may note that the average values of $\gamma$ become much smaller when the two additional parameters are introduced. Then, for a fixed initial state, the corresponding values of $\gamma$ for different choices of the first parameter are listed in Table 2. In the same way, the values of $\gamma$ for different choices of the second parameter are given in Table 3. It is easy to observe from Tables 2 and 3 that the optimal performance is closely related to the two parameters.

Table 1: The value of $\gamma$ for different initial conditions with constraints.
Table 2: The value of $\gamma$ for different values of the first additional parameter with constraints.
Table 3: The value of $\gamma$ for different values of the second additional parameter with constraints.
Figure 1: (a) State response of the closed-loop MJS under the two compared parameter settings (case 1 and case 2). (b) Input signals obtained based on Theorem 2.

5. Conclusion

In this paper, the problem of controller design based on an MPC algorithm for uncertain systems has been discussed. A relaxed scheme which is less conservative than traditional approaches is derived by introducing two additional parameters. Based on this scheme, a new set of criteria for model predictive controller design is obtained for fuzzy Markov jump systems with partially unknown TPMs over an arbitrarily long horizon. A practical example is presented to show the effectiveness and applicability of the developed method. It is expected that the methods and ideas behind this paper can be extended to other systems or problems, such as filter design for the underlying system.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. E. F. Camacho, Model Predictive Control, Springer, New York, NY, USA, 1998.
  2. E. G. Gilbert and K. T. Tan, “Linear systems with state and control constraints: the theory and application of maximal output admissible sets,” IEEE Transactions on Automatic Control, vol. 36, no. 9, pp. 1008–1020, 1991.
  3. S. S. Keerthi and E. G. Gilbert, “Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: stability and moving-horizon approximations,” Journal of Optimization Theory and Applications, vol. 57, no. 2, pp. 265–293, 1988.
  4. J. B. Rawlings and K. R. Muske, “The stability of constrained receding horizon control,” IEEE Transactions on Automatic Control, vol. 38, no. 10, pp. 1512–1516, 1993.
  5. G. Franzè, “A nonlinear sum-of-squares model predictive control approach,” IEEE Transactions on Automatic Control, vol. 55, no. 6, pp. 1466–1471, 2010.
  6. S. L. de Oliveira Kothare and M. Morari, “Contractive model predictive control for constrained nonlinear systems,” IEEE Transactions on Automatic Control, vol. 45, no. 6, pp. 1053–1071, 2000.
  7. L. Magni and R. Scattolini, “Stabilizing model predictive control of nonlinear continuous time systems,” Annual Reviews in Control, vol. 28, no. 1, pp. 1–11, 2004.
  8. P. Mhaskar, N. H. El-Farra, and P. D. Christofides, “Robust predictive control of switched systems: satisfying uncertain schedules subject to state and control constraints,” International Journal of Adaptive Control and Signal Processing, vol. 22, no. 2, pp. 161–179, 2008.
  9. J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, “Extended Kalman filtering with stochastic nonlinearities and multiple missing measurements,” Automatica, vol. 48, no. 9, pp. 2007–2015, 2012.
  10. G. Huang and S. Wang, “Use of uncertainty polytope to describe constraint processes with uncertain time-delay for robust model predictive control applications,” ISA Transactions, vol. 48, no. 4, pp. 503–511, 2009.
  11. E. F. Camacho, D. R. Ramirez, D. Limon, et al., “Model predictive control techniques for hybrid systems,” Annual Reviews in Control, vol. 34, no. 1, pp. 21–31, 2010.
  12. J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, “Probability-guaranteed H∞ finite-horizon filtering for a class of nonlinear time-varying systems with sensor saturations,” Systems & Control Letters, vol. 61, no. 4, pp. 477–484, 2012.
  13. W. Al-Gherwi, H. Budman, and A. Elkamel, “Robust distributed model predictive control: a review and recent developments,” Canadian Journal of Chemical Engineering, vol. 89, no. 5, pp. 1176–1190, 2011.
  14. J. L. Garriga and M. Soroush, “Model predictive control tuning methods: a review,” Industrial and Engineering Chemistry Research, vol. 49, no. 8, pp. 3505–3515, 2010.
  15. H. J. Ferreau, H. G. Bock, and M. Diehl, “An online active set strategy to overcome the limitations of explicit MPC,” International Journal of Robust and Nonlinear Control, vol. 18, no. 8, pp. 816–830, 2008.
  16. G. Pannocchia, J. B. Rawlings, and S. J. Wright, “Fast, large-scale model predictive control by partial enumeration,” Automatica, vol. 43, no. 5, pp. 852–860, 2007.
  17. Y. Wang and S. Boyd, “Fast model predictive control using on-line optimization,” IEEE Transactions on Control Systems Technology, vol. 18, no. 2, pp. 267–278, 2010.
  18. M. V. Kothare, V. Balakrishnan, and M. Morari, “Robust constrained model predictive control using linear matrix inequalities,” Automatica, vol. 32, no. 10, pp. 1361–1379, 1996.
  19. H. H. J. Bloemen, T. J. J. van den Boom, and H. B. Verbruggen, “Optimizing the end-point state-weighting matrix in model-based predictive control,” Automatica, vol. 38, no. 6, pp. 1061–1068, 2002.
  20. F. A. Cuzzola, J. C. Geromel, and M. Morari, “An improved approach for constrained robust model predictive control,” Automatica, vol. 38, no. 8, pp. 1183–1189, 2002.
  21. B. C. Ding, Y. G. Xi, and S. Y. Li, “A synthesis approach of on-line constrained robust model predictive control,” Automatica, vol. 40, no. 1, pp. 163–167, 2004.
  22. W.-J. Mao, “Robust stabilization of uncertain time-varying discrete systems and comments on: ‘An improved approach for constrained robust model predictive control’,” Automatica, vol. 39, no. 6, pp. 1109–1112, 2003.
  23. B. Pluymers, J. A. K. Suykens, and B. De Moor, “Min-max feedback MPC using a time-varying terminal constraint set and comments on: ‘Efficient robust constrained model predictive control with a time varying terminal constraint set’,” Systems & Control Letters, vol. 54, no. 12, pp. 1143–1148, 2005.
  24. Z. Y. Wan, B. Pluymers, M. V. Kothare, and B. de Moor, “Comments on ‘Efficient robust constrained model predictive control with a time varying terminal constraint set’,” Systems & Control Letters, vol. 55, no. 7, pp. 618–621, 2006.
  25. Z. Y. Wan and M. V. Kothare, “Efficient robust constrained model predictive control with a time varying terminal constraint set,” Systems & Control Letters, vol. 48, no. 5, pp. 375–383, 2003.
  26. J. H. Lee, “Model predictive control: review of the three decades of development,” International Journal of Control, Automation, and Systems, vol. 9, no. 3, pp. 415–424, 2011.
  27. L. Blackmore, M. Ono, A. Bektassov, and B. C. Williams, “A probabilistic particle-control approximation of chance-constrained stochastic predictive control,” IEEE Transactions on Robotics, vol. 26, no. 3, pp. 502–517, 2010.
  28. J. Hu, Z. Wang, H. Gao, and L. K. Stergioulas, “Robust sliding mode control for discrete stochastic systems with mixed time-delays, randomly occurring uncertainties and randomly occurring nonlinearities,” IEEE Transactions on Industrial Electronics, vol. 59, no. 7, pp. 3008–3015, 2012.
  29. J. Hu, Z. Wang, B. Shen, and H. Gao, “Gain-constrained recursive filtering with stochastic nonlinearities and probabilistic sensor delays,” IEEE Transactions on Signal Processing, vol. 61, no. 5, pp. 1230–1238, 2013.
  30. S. Mollov, T. van den Boom, F. Cuesta, et al., “Robust stability constraints for fuzzy model predictive control,” IEEE Transactions on Fuzzy Systems, vol. 10, no. 1, pp. 50–64, 2002.
  31. L. Wu, X. Su, P. Shi, and J. Qiu, “Model approximation for discrete-time state-delay systems in the T-S fuzzy framework,” IEEE Transactions on Fuzzy Systems, vol. 19, no. 2, pp. 366–378, 2011.
  32. Y. Xia, H. Yang, P. Shi, and M. Fu, “Constrained infinite-horizon model predictive control for fuzzy discrete-time systems,” IEEE Transactions on Fuzzy Systems, vol. 18, no. 2, pp. 429–436, 2010.
  33. L. Zhang and E.-K. Boukas, “Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities,” Automatica, vol. 45, no. 2, pp. 463–468, 2009.
  34. L. Zhang, E.-K. Boukas, and J. Lam, “Analysis and synthesis of Markov jump linear systems with time-varying delays and partially known transition probabilities,” IEEE Transactions on Automatic Control, vol. 53, no. 10, pp. 2458–2464, 2008.
  35. M. C. de Oliveira, J. Bernussou, and J. C. Geromel, “A new discrete-time robust stability condition,” Systems & Control Letters, vol. 37, no. 4, pp. 261–265, 1999.
  36. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, Pa, USA, 1994.