Abstract

This paper addresses the problem of robust model predictive control (RMPC) for a class of linear time-varying systems with constraints and data losses. Polytopic uncertainty is used to describe the uncertain systems. First, we design a robust state observer by means of linear matrix inequality (LMI) constraints so that the original system state can be tracked. Second, the MPC gain is computed by minimizing the upper bound of an infinite horizon robust performance objective, again in terms of LMI conditions. The design of the robust MPC and of the state observer is illustrated by a numerical example.

1. Introduction

Model predictive control (MPC) [1, 2] is an important method for handling control problems for systems with input, state, and output constraints [3–7]. The current control action is obtained by solving online, at each time instant, an open-loop constrained infinite horizon optimization problem. The current state of the system is treated as the initial state of the optimal control problem [8], and only the first element of the optimal control sequence is implemented [9]. In this line of research, a model is used to predict the future behavior of the real system and to obtain an optimal control sequence that satisfies the input, state, and output constraints; the quality of the model is therefore vital in robust MPC.
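To make the receding-horizon mechanism concrete, the following minimal Python sketch solves a constrained optimization at every step and applies only the first input. It uses an assumed nominal LTI model, horizon, weights, and input bound, and a finite horizon rather than the robust infinite horizon min-max problem studied in this paper; it illustrates the principle, not the proposed algorithm.

```python
# Receding-horizon sketch: solve a constrained optimal control problem at
# each step, apply only the first input, then repeat from the new state.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed nominal model
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)        # assumed weights
N, u_max = 10, 2.0                       # assumed horizon and input bound

x = np.array([1.0, 0.0])                 # current measured state
for k in range(30):
    X = cp.Variable((2, N + 1))
    U = cp.Variable((1, N))
    cost, cons = 0, [X[:, 0] == x]       # current state is the initial state
    for i in range(N):
        cost += cp.quad_form(X[:, i], Q) + cp.quad_form(U[:, i], R)
        cons += [X[:, i + 1] == A @ X[:, i] + B @ U[:, i],
                 cp.abs(U[:, i]) <= u_max]
    cp.Problem(cp.Minimize(cost), cons).solve()
    u0 = U.value[:, 0]                   # only the first input is implemented
    x = A @ x + B @ u0                   # plant update (nominal here)
```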

From this viewpoint, it is important to develop MPC algorithms that are robust against model uncertainties and guarantee a prescribed control performance objective [10–17]. This type of MPC has been studied for many years [18–21], and MPC has become an efficient tool both in automatic control theory and in industrial processes. In [22], a robust MPC is presented for a class of uncertain systems and then applied to an angular positioning system. In addition, robust MPC based on explicit model uncertainty descriptions has been proposed [23]. Among such descriptions, the polytopic system model is considered an effective one for the uncertainty modeling of linear time-varying (LTV) systems.

With the increasing requirements on reliability, environmental sustainability, and profitability, communication networks are increasingly applied in practical industrial processes [24–31]. In real systems, the measured data may be transferred through multiple sensors, which are connected to the controller via a network shared with other networked control systems. Owing to sensor aging, temporary sensor failures, and network effects, successive packet dropouts are unavoidable in industrial processes [32–34]. Moreover, another significant factor, input constraints, which stem from the physical limitations of engineering equipment, cannot be ignored. Therefore, the robust MPC controller should be designed with both packet dropouts and constraints taken into account.

In this paper, we describe an industrial process as a linear time-varying system with packet dropouts from the MPC controller and the dynamic output controller to the plant. A Bernoulli random binary distribution with known probabilities is used to describe the packet dropout phenomena, so that the MPC controller can be analyzed and designed to guarantee that the closed-loop system is stochastically stable. In addition, a Lyapunov function is adopted in the design of the controller and the state observer, which makes the result less conservative. A numerical example is given to demonstrate the effectiveness of the design method. The main contribution of this paper is that a robust MPC controller is developed for a control system with uncertainties, saturations, and packet dropouts under time-varying probabilities.

The paper is organized as follows. The problem is formulated in Section 2. Section 3 analyzes the infinite horizon robust performance objective. The procedure for obtaining the state observer gain is presented in Section 4, and the MPC controller gain and the dynamic output controller are presented in Sections 5 and 6, respectively. Two numerical examples verify the feasibility of the proposed method in Section 7. Finally, Section 8 presents the conclusion.

Notation. The notation used in the paper is standard. The superscript T represents matrix transposition, and R^n denotes the n-dimensional Euclidean space. I and 0 denote the identity matrix and the zero matrix, respectively. The notations X > 0 and X ≥ 0 mean that the matrix X is real symmetric and positive definite (positive semidefinite, respectively). The norm of a matrix A is defined by ||A|| = sqrt(tr(A^T A)), where tr(·) stands for the trace operator, and || · || also refers to the usual Euclidean vector norm. The symbol * denotes the elements below the main diagonal of a symmetric block matrix. Prob{α} means the occurrence probability of the event α. E{x} and E{x | y} represent the expectation of x and the expectation of x conditional on y, respectively.

2. Problem Formulation

First, we consider the following linear time-varying system:

x(k + 1) = A(k)x(k) + B(k)u(k),  y(k) = C(k)x(k),  (1)

where x(k) is the plant state vector, y(k) is the output of the plant, u(k) is the control input, and A(k), B(k), and C(k) are the system parameters with [A(k) B(k) C(k)] ∈ Ω, where Ω is a given set. For polytopic systems, the set Ω is the polytope

Ω = Co{[A_1 B_1 C_1], [A_2 B_2 C_2], ..., [A_L B_L C_L]},  (2)

where Co represents the convex hull. Moreover, if [A B C] ∈ Ω, then [A B C] = λ_1[A_1 B_1 C_1] + ... + λ_L[A_L B_L C_L] for some λ_i ≥ 0, i = 1, ..., L, with λ_1 + ... + λ_L = 1. When L = 1, the system (1) reduces to a linear time-invariant system.
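As a small illustration of this polytopic description, the sketch below (with placeholder vertex matrices, not the example of Section 7) forms one admissible realization of the time-varying matrices as a convex combination of the vertices:

```python
# Polytopic LTV sketch: A(k), B(k) are an unknown convex combination of
# known vertex matrices; here one admissible realization is sampled.
import numpy as np

A_vertices = [np.array([[1.0, 0.1], [0.0, 1.0]]),   # placeholder vertices
              np.array([[1.0, 0.1], [0.0, 0.9]])]
B_vertices = [np.array([[0.0], [0.10]]),
              np.array([[0.0], [0.12]])]

def convex_weights(num_vertices, rng):
    """Random lambda_i >= 0 with sum 1 (one point of the simplex)."""
    w = rng.random(num_vertices)
    return w / w.sum()

rng = np.random.default_rng(0)
lam = convex_weights(len(A_vertices), rng)
A_k = sum(l * Ai for l, Ai in zip(lam, A_vertices))   # A(k) in the polytope
B_k = sum(l * Bi for l, Bi in zip(lam, B_vertices))   # B(k) in the polytope
```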

In this paper, the current system matrices A(k), B(k), and C(k) are assumed to be exactly known at time k. However, the future matrices A(k + i), B(k + i), and C(k + i), i ≥ 1, are uncertain and only known to vary within the given polytope Ω. We assume that the linear discrete-time system (1) has input constraints [35] that must be satisfied at each instant k, of the form |u(k + i | k)| ≤ u_max, i ≥ 0, (3) where u_max refers to the peak bound of the input u.

In this paper, we consider a state observer (4) to estimate the state of the plant (1), where x̂(k) refers to the estimated state of x(k) and L refers to the state observer gain. It is assumed that the initial state estimate and the initial output are known exactly. Note that the system matrices, the output y(k), and the estimated state x̂(k) are known at each instant k, so that x̂(k + 1) can be calculated by using (4). However, the future estimated states beyond the next step are uncertain.
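For concreteness, a standard Luenberger-type observer consistent with this description is shown below; this particular form is an assumption on our part and need not coincide exactly with the paper's equation (4).

```latex
\hat{x}(k+1) = A(k)\hat{x}(k) + B(k)u(k) + L\bigl(y(k) - C(k)\hat{x}(k)\bigr),
```

where the last term corrects the model prediction using the measurement that reaches the observer.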

The MPC controller is to be determined as u(k + i | k) = F(k)x̂(k + i | k), i ≥ 0, (5) where F(k) is the feedback control gain at time instant k. In MPC, only the first calculated control input is implemented. At the next sampling time k + 1, the state is measured again and the optimization is repeated to compute F(k + 1).

The dynamic output controller takes the following form, where the controller matrices are the unknown parameters of the dynamic output controller to be determined.

Combining (1), (6), and (10), the augmented closed-loop system can be written in a compact form, with the augmented state vector and the augmented system matrices defined accordingly.

In practical industrial processes, we must consider not only the uncertainties in (2) but also another important factor: data loss. Data losses are inevitable in communication networks, because the measurement data may pass through many sensors and network links in the real process.

Here, we introduce a stochastic variable to describe successive packet dropouts in a random way. The characteristic of successive packet dropouts is that the latest successfully received data is held and sent to the state observer; if the data is not updated, the control input remains unchanged. Moreover, taking packet dropouts into account is consistent with realistic conditions in industry. Therefore, the signals transmitted through the communication network to the state observer can be expressed as follows, where the control input is computed using the feedback gain determined at each instant k, and the actual measurement signal is the one transmitted to the plant (1) and the state observer (4). We assume that the stochastic variable takes values in {0, 1} and follows a Bernoulli distribution with known mean and variance.
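A minimal sketch of this hold-last-value dropout mechanism, with an assumed arrival probability and a placeholder controller output, is the following:

```python
# Bernoulli dropout sketch: with probability alpha_bar the new signal is
# delivered; otherwise the last successfully received value is held.
import numpy as np

rng = np.random.default_rng(1)
alpha_bar = 0.8            # assumed arrival probability, E{alpha(k)} = alpha_bar
u_applied = 0.0            # last successfully delivered value

def transmit(u_new, u_last, rng, p=alpha_bar):
    alpha = rng.random() < p               # alpha(k) in {0, 1}
    return (u_new if alpha else u_last), alpha

for k in range(5):
    u_computed = -0.5 * k                  # placeholder controller output
    u_applied, received = transmit(u_computed, u_applied, rng)
```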

Similarly, we introduce two further stochastic variables to describe successive packet dropouts on the remaining channels, where the signals involved are the control input, the actual measurement signal transmitted to the plant (1), and the actual signal transmitted to the dynamic output controller, respectively. These stochastic variables are also assumed to follow Bernoulli distributions with known means and variances.

The following lemma is used in the subsequent proofs.

Lemma 1. Let four real matrices of appropriate dimensions be given; then the stated matrix inequality holds if and only if there exists a positive scalar such that the corresponding bounded inequality holds or, equivalently, the corresponding LMI is satisfied.
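A commonly used bounding lemma of this type, which matches the structure of the statement above, is the following; this particular form is our assumption and may differ in detail from the paper's exact statement.

```latex
\text{Let } Y = Y^{T},\ D,\ E,\ F \text{ be real matrices with } F^{T}F \le I.\ \text{Then}
\quad Y + DFE + E^{T}F^{T}D^{T} < 0
\quad\Longleftrightarrow\quad
\exists\,\varepsilon > 0:\ Y + \varepsilon DD^{T} + \varepsilon^{-1}E^{T}E < 0
\quad\Longleftrightarrow\quad
\begin{bmatrix}
Y & \varepsilon D & E^{T}\\
\varepsilon D^{T} & -\varepsilon I & 0\\
E & 0 & -\varepsilon I
\end{bmatrix} < 0 .
```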

3. Infinite Horizon Robust Performance Objective Analysis

In this paper, the aim of the robust MPC is to determine a model predictive control law that drives the state of the system to the origin, minimizes the robust performance objective in the worst case, and ensures that the closed-loop system is asymptotically stable. In order to determine the control input under the input constraints, we minimize the performance objective through minimizing its upper bound. Consider the performance objective given by the following infinite horizon objective function, in which the two weighting matrices are positive definite and must be specified in advance. It is worth noting that the choice of these weighting matrices is not unique; they are usually adjusted according to the practical situation.
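A standard form of such an infinite horizon min-max objective, with the two weighting matrices here denoted Q and R (the symbols are our choice), is

```latex
\min_{u(k+i\mid k),\,i\ge 0}\ \max_{[A(k+i)\ B(k+i)\ C(k+i)]\in\Omega,\,i\ge 0} J_{\infty}(k),
\qquad
J_{\infty}(k)=\sum_{i=0}^{\infty}\Bigl[x(k+i\mid k)^{T}Q\,x(k+i\mid k)+u(k+i\mid k)^{T}R\,u(k+i\mid k)\Bigr],
```

with Q > 0 and R > 0.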

We consider transferring the minimization of this objective into the minimization of its upper bound. Therefore, we assume that the following inequality (14) holds for all admissible future system matrices in the polytope, for all predicted inputs satisfying the input constraints (3), and for any prediction step, where the matrix defining the Lyapunov function is symmetric.

Proof. First, we construct a Lyapunov function defined by a symmetric positive-definite matrix and assume that it vanishes as the prediction horizon tends to infinity. Summing (14) from i = 0 to i = ∞ yields an inequality that bounds the objective by the Lyapunov function evaluated at the current state. Let γ be an upper bound of this quantity; we then obtain the corresponding bound on the objective. In a similar way, for the dynamic output feedback case we transfer the minimization of the objective into the minimization of its upper bound and assume that the analogous inequality holds for all admissible system matrices, all inputs satisfying the input constraints (3), and any prediction step.
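The standard argument behind such an upper bound, sketched here under the assumption of a quadratic Lyapunov function V(x) = x^T P x with P > 0, imposes the one-step decrease condition

```latex
V\bigl(x(k+i+1\mid k)\bigr)-V\bigl(x(k+i\mid k)\bigr)\le
-\Bigl[x(k+i\mid k)^{T}Q\,x(k+i\mid k)+u(k+i\mid k)^{T}R\,u(k+i\mid k)\Bigr],\qquad i\ge 0 .
```

Summing from i = 0 to ∞ and using V(x(∞ | k)) = 0 gives J_∞(k) ≤ V(x(k | k)) ≤ γ, so minimizing the bound γ minimizes a worst-case upper bound on the objective.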

4. State Observer Design

In this section, we consider the design of the state observer (4). The state estimation error is defined as e(k) = x(k) − x̂(k). Combining (1) with (4), we can derive the corresponding error dynamics.

Next, we use an LMI to determine the state observer gain L. First, we construct a Lyapunov function for the estimation error and assume that it satisfies the following decay inequality (20), where the decay rate and the weighting matrix are chosen by the designer. Under assumption (20), the state observer gain can be obtained by the following theorem.

Theorem 2. If there exist matrices satisfying the following inequality, then the state observer gain L is obtained from the indicated relation. Consequently, the estimated state can track the original system state as k → ∞.

Proof. Inequality (20) is equivalent to an inequality in the error state, using the indicated substitution. Inequality (22) can then be converted into an equivalent form, and by using the Schur complement we obtain the corresponding LMI constraint. With the indicated change of variable, inequality (24) is equivalent to a condition that must hold at every vertex of the polytope, and the state observer gain L is then recovered from its solution.
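As a numerical illustration of how such a vertex-wise LMI is solved, the Python/CVXPY sketch below searches for P > 0 and Y (standing for PL) so that a decay condition of the form (A_j − LC_j)^T P (A_j − LC_j) ≤ ρP holds at every vertex, and then recovers L = P^{-1} Y. The vertex matrices, output matrix, and decay rate are placeholders, and the LMI shown is one common realization of a decay condition, not necessarily the paper's exact inequality.

```python
# Observer-gain LMI sketch: find P > 0 and Y = P L such that
# [rho*P, (P A_j - Y C_j)^T; P A_j - Y C_j, P] >= 0 at every vertex,
# which by the Schur complement enforces quadratic decay of the error.
import numpy as np
import cvxpy as cp

A_v = [np.array([[1.0, 0.1], [0.0, 1.0]]),      # placeholder vertices
       np.array([[1.0, 0.1], [0.0, 0.9]])]
C_v = [np.array([[1.0, 0.0]]), np.array([[1.0, 0.0]])]
rho = 0.9                                       # assumed decay rate
n, q = 2, 1

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, q))
cons = [P >> 1e-6 * np.eye(n)]
for Aj, Cj in zip(A_v, C_v):
    X = P @ Aj - Y @ Cj                         # = P (A_j - L C_j)
    cons.append(cp.bmat([[rho * P, X.T],
                         [X,       P  ]]) >> 0)
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
L_gain = np.linalg.solve(P.value, Y.value)      # L = P^{-1} Y
```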

5. MPC Controller Design

Now our goal is to design a robust MPC law that generates an optimal control input so that the performance objective is achieved. In the following, Theorem 3 guarantees that the desired performance objective can be attained by computing the feedback gain F(k) in (5) for the linear discrete-time system (1) under the constraints (3) and the packet dropouts (9).

Theorem 3. Assume that the uncertainty lies in the prescribed polytope and that the dropout probabilities take values in a known range. The control input (5) under the constrained input (3) that minimizes the performance objective is obtained from the indicated relation, where the decision matrix is symmetric and the upper bound of the objective can be found by solving the following minimization problem:
subject to the stated LMI conditions, in which a positive scalar is introduced and the remaining quantities are defined accordingly.

At each time instant, there exists an optimal control law that minimizes the performance objective, and the closed-loop system is asymptotically stable.

Proof. In order to achieve the robust MPC performance objective under the input constraint and packet dropouts, three LMIs must be feasible. The first LMI is obtained from (16); the second LMI is obtained by minimizing the upper bound of the objective; and the third LMI is obtained from the input constraint (3). The details are given below.
With the indicated substitution and the Schur complement, we obtain inequality (28). Combining (4), (5), and (9) with (14) yields inequality (30). Under the stated condition, inequality (30) is equivalent to (31) after premultiplying and postmultiplying by the indicated matrices, and (31) is satisfied if there exists a positive-definite weighting matrix such that (32) holds. Inequality (32) is equivalent to (33), with the terms defined accordingly, and (33) is in turn equivalent to a further condition. By using the Schur complement repeatedly together with Lemma 1, we obtain the LMI of the theorem, in which a positive scalar is introduced. Finally, combining (3), (5), (9), (16), and (26) and applying the Schur complement once more yields the input-constraint LMI.
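To illustrate how a minimization problem of this kind is handled numerically, the sketch below sets up classical Kothare-type LMIs (a cost-bound condition, a vertex-wise invariance LMI, and a peak input-constraint LMI) in Python/CVXPY and recovers the gain F = Y Q^{-1}. The packet-dropout terms of Theorem 3 are omitted for brevity, and all numerical data are placeholders; this is a sketch of the solution machinery rather than the paper's exact conditions.

```python
# Min-gamma robust MPC sketch (Kothare-style, no dropout terms):
#   min gamma  s.t.  cost-bound LMI, vertex invariance LMIs, input LMI,
# with Q = gamma * P^{-1} and Y = F Q as linearizing variables.
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

A_v = [np.array([[1.0, 0.1], [0.0, 1.0]]),      # placeholder vertices
       np.array([[1.0, 0.1], [0.0, 0.9]])]
B_v = [np.array([[0.0], [0.1]])] * 2
Q1, R = np.eye(2), 0.1 * np.eye(1)              # assumed weighting matrices
Q1h, Rh = np.real(sqrtm(Q1)), np.real(sqrtm(R))
x0, u_max, n, m = np.array([1.0, 0.0]), 2.0, 2, 1

gam = cp.Variable(nonneg=True)
Qv = cp.Variable((n, n), symmetric=True)
Yv = cp.Variable((m, n))
Xv = cp.Variable((m, m), symmetric=True)

cons = [cp.bmat([[np.ones((1, 1)), x0.reshape(1, n)],
                 [x0.reshape(n, 1), Qv]]) >> 0]  # x0 inside the ellipsoid
for Aj, Bj in zip(A_v, B_v):
    M = Aj @ Qv + Bj @ Yv
    cons.append(cp.bmat([
        [Qv,       M.T,              Qv @ Q1h,         Yv.T @ Rh],
        [M,        Qv,               np.zeros((n, n)), np.zeros((n, m))],
        [Q1h @ Qv, np.zeros((n, n)), gam * np.eye(n),  np.zeros((n, m))],
        [Rh @ Yv,  np.zeros((m, n)), np.zeros((m, n)), gam * np.eye(m)]]) >> 0)
cons += [cp.bmat([[Xv, Yv], [Yv.T, Qv]]) >> 0,   # peak input bound
         Xv[0, 0] <= u_max ** 2]

cp.Problem(cp.Minimize(gam), cons).solve(solver=cp.SCS)
F = Yv.value @ np.linalg.inv(Qv.value)           # state-feedback gain
```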

6. Dynamic Output Controller

Theorem 4. Assume that the uncertainty lies in the prescribed polytope and that the dropout probabilities take values in a known range. The control input (5) under the constrained inputs (3) that minimizes the performance objective is obtained from the indicated relation, where the decision matrix is symmetric and the upper bound of the objective can be found by solving the following minimization problem, subject to the stated LMI conditions, in which a positive scalar is introduced and the remaining quantities are defined accordingly.

Proof. With the indicated substitution and the Schur complement, we obtain inequality (41). Combining (10) and (7) with (18) and premultiplying and postmultiplying (46) by the indicated matrices, we obtain an equivalent condition. By further transformation, with the terms defined accordingly, and by using the Schur complement repeatedly together with Lemma 1, we obtain the final inequality, in which a positive scalar is introduced.

7. Simulation Results

Consider the following uncertain polytopic system, in which the polytope is formed by two local discrete-time models. In the simulation, the parameters are set as follows. The input constraint is as follows. In Theorem 2, the decay rate and the weighting matrix are given as follows. Then, the infinite horizon robust performance objective has the following weighting matrices.

Another weighting matrix is given as follows, and the initial states of the real system (1) and of the state observer are specified, respectively.

The resulting state observer gain and MPC controller gain are obtained as follows.

Figure 1 shows the two state curves of the original closed-loop system. In Figures 2 and 3, the solid lines represent the real states of the original closed-loop system and the dashed lines denote the corresponding estimated states. Figure 4 shows the control input under the robust performance objective with the constrained input, and Figure 5 shows the output of the closed-loop system, which converges to zero.
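The kind of closed-loop simulation behind these figures can be organized as in the sketch below, in which the model, gains, dropout probability, and input bound are placeholder values rather than those of the example, and the plant is propagated with one nominal vertex for simplicity:

```python
# Closed-loop simulation sketch: saturated state feedback on the observer
# state, a Bernoulli hold-last-value measurement channel, observer update,
# and plant update. All numerical values are placeholders.
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
F = np.array([[-0.5, -1.5]])       # placeholder MPC gain
L = np.array([[0.4], [0.4]])       # placeholder observer gain
u_max, alpha_bar = 2.0, 0.8

x = np.array([[1.0], [0.0]])       # plant state
xh = np.zeros((2, 1))              # observer state
y_held = C @ x                     # last received measurement
for k in range(50):
    u = np.clip(F @ xh, -u_max, u_max)           # constrained input
    if rng.random() < alpha_bar:                 # packet delivered
        y_held = C @ x
    xh = A @ xh + B @ u + L @ (y_held - C @ xh)  # observer update
    x = A @ x + B @ u                            # plant update (nominal)
```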

Consider the following uncertain polytopic system, in which the polytope is formed by two local discrete-time models. In the simulation, the parameters are set as follows. The input constraint is as follows. Then, the infinite horizon robust performance objective has the following weighting matrices, and the initial states of the real system (1) and of the dynamic output controller are specified, respectively.

The dynamic output controller gains are obtained as follows.

In Figure 6, the two curves are the states of the original open-loop system. In Figure 7, the two curves are the states of the original closed-loop system. In Figure 8, the two curves are the states of the dynamic output controller. Figure 9 shows the control input under the robust performance objective with the constrained input, Figure 10 shows the output of the dynamic output controller, and Figure 11 shows the output of the closed-loop system, which converges to zero.

8. Conclusion

In this paper, we have designed an output-feedback MPC scheme to solve the robust MPC problem with input constraints and successive packet dropouts. The method makes use of an infinite horizon min-max algorithm with LMI constraints. First, a state observer was constructed; the optimization problem was then solved through a set of LMI constraints, and the control input sequence was obtained by solving the infinite horizon robust MPC problem with input constraints based on the estimated state of the observer. The simulation results verify that the proposed robust MPC design with input constraints is feasible. As future work, we will extend the output-feedback MPC algorithm to nonlinear MPC and its optimization.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.