Abstract

This paper briefly reviews the development of nontracking robust model predictive control (RMPC) schemes for uncertain systems using linear matrix inequalities (LMIs) subject to input-saturated and softened state constraints. We then develop two new tracking setpoint RMPC schemes, one with a common Lyapunov function and one with a zero terminal equality, subject to input-saturated and softened state constraints. The novel tracking setpoint RMPC schemes are able to stabilize uncertain systems even when the output setpoints lead to violation of the state constraints. The amount of state violation can be regulated by changing the value of the weighting factor. A brief comparative simulation study of the two tracking setpoint RMPC schemes is carried out via simple examples to demonstrate the ability of the softened state constraint schemes. Finally, some directions for future research arising from this study are discussed.

1. Introduction

Model predictive control (MPC) refers to a class of algorithms that compute a sequence of manipulated variable adjustments in order to optimize the future behavior of a plant. MPC works out the optimal open-loop manipulated input trajectory that minimizes the difference between the predicted plant behavior and the desired plant behavior. It differs from other control approaches in that the optimal control problem is solved on-line for the current state of the plant, rather than determined off-line as a feedback policy. MPC has been widely applied in the petrochemical and related industries because of its ability to handle input and output constraints in the optimal control problem. However, MPC is not designed to explicitly handle plant-model uncertainty, and such a controller might perform very poorly when implemented on a physical system that is not exactly described by the model [1].

Robust model predictive control (RMPC) is a class of predictive control schemes that accounts for the modeling error in the controller design. Instead of forecasting the system behavior using a single process model as in MPC, RMPC forecasts the system behavior for all possible models in the uncertainty set. The optimal actions are determined through a min-max optimization, which minimizes the deviation of the forecasted behavior from the desired behavior for the model with the largest deviation.

Modeling the uncertainty has been studied by a number of researchers. The two common types of uncertainty are parametric uncertainty and bounded disturbances. Magni et al. [2] and Rami and Zhou [3] proposed using multiplicative input and state uncertainties. Parametric uncertainties are modeled as either bounded uncertainty regions or a polytopic set.

Researchers have proved robust stability for MPC by using closed-loop stability criteria, that is, a terminal cost and terminal stability constraints. The simplest way to enforce stability with a finite prediction horizon is to add a so-called zero terminal equality at the end of the prediction horizon, as in Keerthi and Gilbert [4]. One disadvantage of a zero terminal equality is that the system must be brought to the origin in finite time. This leads, in general, to infeasibility for short control horizon lengths; that is, the horizon may not be long enough to satisfy the constraint from all initial states.

Mayne and Michalska [5] developed the concept of a terminal region with the dual-mode approach. The terminal region concept can be viewed as a relaxation of the zero terminal equality. Several other schemes have been proposed by Rawlings [6], Morari and Lee [7], Allgower et al. [8], and Chen and Allgower [9] that try to avoid the use of a zero terminal equality. Most of them use a terminal region constraint and/or a terminal penalty with the assumption that there exists a neighborhood of the origin in which a linear feedback control law is asymptotically stabilizing. This neighborhood serves as the terminal region for the finite horizon controller. Minh and Afzulpurkar [10] developed an MPC scheme without a terminal region using softened state constraints.

The regulator determines the optimal manipulated input trajectory based on the system behavior predictions of the models. Scokaert and Mayne [11] proposed a min-max control law that steers the state into an invariant region in which a state feedback law guarantees convergence to the origin for all states. The main disadvantage of min-max approaches is their computational complexity: the optimization problems are expensive to solve on-line.

Recent developments in the theory and application of optimization involving linear matrix inequalities (LMIs) have resulted in a new class of methodologies for MPC, because much of the existing robust control theory can now be recast in the framework of LMIs and because the resulting optimization problems can be solved and implemented on-line (Henrion et al. [12] and Primbs [13]).

Kothare et al. [14] proposed using an LMI-based optimizer to find robustly stabilizing state feedback gains that minimize the worst-case performance objective for a given model uncertainty description. Casavola et al. [15] extended the concept to uncertain systems with input-saturated constraints.

Processes may have both input and state constraints, so a difficulty arises from the inability to satisfy the state constraints. Since the RMPC regulator is designed for online implementation, any infeasible solution of the optimization problem cannot be tolerated. To guarantee system stability once the state violates the constraints, Minh and Afzulpurkar [16] developed an RMPC algorithm for input-saturated and softened state constraints using penalty terms added to the objective function. These terms keep the state violation at low values until a constrained solution is returned. The state violation can be regulated by changing the value of the weighting factor.

The RMPC algorithm in Minh and Afzulpurkar [16] was restricted to the finite horizon regulator with a zero target, that is, a nontracking setpoint. In this paper, we extend the RMPC algorithm in [16] to a tracking setpoint RMPC algorithm. The new RMPC algorithm allows the system to track a reference output trajectory different from the origin.

Our paper is organized as follows. Section 2 gives a description of the uncertain system used by the authors. Section 3 summarizes the nontracking RMPC scheme for input-saturated and softened state constraints. Section 4 develops a tracking setpoint RMPC scheme using a common Lyapunov function. Section 5 develops a tracking setpoint RMPC scheme using a zero terminal equality. Finally, in Section 6, conclusions and some directions for future research are discussed.

2. Description of Uncertain System

For modeling the uncertain system, instead of relying on a parameterization of the process, our uncertain system is described by the following discrete state-space model, similar to the multiple models described by Kothare et al. [14]: x(k+1) = A(k) x(k) + B(k) u(k), y(k) = C x(k), in which A(k) and B(k) are the time-varying state-space model matrices; C describes the relationship between the output and the state without any uncertainty; x(k), u(k), and y(k) are the state, input, and output vectors, respectively. When model uncertainty is present, the exact plant model is unknown; the model uncertainty belongs to a polytopic set Ω, where Co refers to a convex hull, [A(k), B(k)] ∈ Ω = Co{[A_1, B_1], ..., [A_L, B_L]}, and L is the number of vertex models.

In RMPC, the regulator determines the optimal manipulated input trajectory based on the system behavior predictions of the models. We use a scheme similar to the method introduced by Scokaert and Mayne [11] that produces a tree trajectory for all possible time-varying combinations of the L vertex models. Figure 1 illustrates all possible state trajectories of all L models over the prediction horizon length N; each node of the tree carries a predicted state vector and a predicted input vector at time k+i, for i = 1, ..., N.

The total number of nodes in the tree depends on the number of models L and the prediction horizon length N.

The total number of branches likewise depends on the number of models L and the prediction horizon length N, as illustrated by the counting in the sketch below.
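To make the tree construction concrete, the short sketch below enumerates the terminal nodes of such a tree for a small polytope. The vertex matrices, the initial state, and the input sequence are placeholder values chosen here for illustration and are not taken from the paper.

```python
from itertools import product
import numpy as np

def build_prediction_tree(A_list, B_list, x0, u_seq):
    """Enumerate the predicted state at every terminal node of the tree.

    A_list, B_list : vertex matrices [A_1, ..., A_L], [B_1, ..., B_L]
    x0             : current state x(k)
    u_seq          : candidate input sequence u(k), ..., u(k+N-1)
    Returns a list of L**N terminal states, one per time-varying model sequence.
    """
    L, N = len(A_list), len(u_seq)
    terminal_states = []
    # every length-N sequence of vertex indices is one path through the tree
    for models in product(range(L), repeat=N):
        x = x0
        for i, j in zip(range(N), models):
            x = A_list[j] @ x + B_list[j] @ u_seq[i]
        terminal_states.append(x)
    return terminal_states

# placeholder data: L = 2 vertex models, horizon N = 3
A_list = [np.array([[1.0, 0.1], [0.0, 0.9]]), np.array([[1.0, 0.1], [0.0, 0.8]])]
B_list = [np.array([[0.0], [0.1]])] * 2
u_seq = [np.array([0.5])] * 3
nodes = build_prediction_tree(A_list, B_list, np.array([1.0, 0.0]), u_seq)
print(len(nodes))                          # 2**3 = 8 terminal nodes
print(sum(2**i for i in range(1, 4)))      # 2 + 4 + 8 = 14 branches in total
```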

The objective of the control problem is to find the control actions that, once implemented, cause all branches in the tree trajectory to converge to a robust control invariant set in which a state feedback law guarantees convergence to the origin for all states.

3. Summary of Nontracking RMPC Scheme for Input-Saturated and Softened State Constraints

In the infinite horizon RMPC, Kothare et al. [14] proposed an RMPC scheme using LMIs that minimizes a robust performance objective function in quadratic form at each sampling time k, in which the predicted states and inputs are weighted by symmetric matrices; this problem is referred to as the infinite horizon RMPC. The minimization is performed with a constant state feedback control law.

This is a min-max problem. The maximization over the uncertainty set corresponds to choosing the time-varying model that leads to the largest, or "worst-case", value of the objective among all models in the set. Suppose that there exists a quadratic Lyapunov function of the state that is zero at the origin and positive elsewhere. For the objective function to be finite, the predicted state must go to zero, and the system will be stable if the Lyapunov function is decreasing along the predicted trajectories; summing the decrease condition over the infinite horizon gives an upper bound on the worst-case objective. Substituting the state feedback law and introducing the scaled variables used in [14], the decrease condition can be transformed into equation (4), which is equivalent to an LMI (5) by the Schur complement. For the unconstrained infinite horizon RMPC, we can therefore find a state feedback gain in the control law for all model uncertainties, which guarantees a decreasing robust performance bound, since we have found a common Lyapunov matrix for all models in (5).
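For reference, the standard worst-case bound and vertex LMI of Kothare et al. [14] can be written in commonly used notation as follows; the symbols Q = γP⁻¹, Y = FQ, the weights Q₁, R, and the vertex matrices (A_j, B_j) are assumed names for this sketch and may differ from the symbols of the original equations (3)-(5).

```latex
% Assumed notation: V(x) = x^T P x, Q = \gamma P^{-1}, Y = F Q; (A_j, B_j) are
% the vertices of the polytope and Q_1, R the symmetric weighting matrices.
\begin{align}
  &\begin{bmatrix} 1 & x(k|k)^{\mathsf T} \\ x(k|k) & Q \end{bmatrix} \succeq 0
   \qquad\text{(equivalent to } V(x(k|k)) \le \gamma\text{)},\\[2pt]
  &\begin{bmatrix}
     Q & (A_j Q + B_j Y)^{\mathsf T} & (Q_1^{1/2} Q)^{\mathsf T} & (R^{1/2} Y)^{\mathsf T}\\
     A_j Q + B_j Y & Q & 0 & 0\\
     Q_1^{1/2} Q & 0 & \gamma I & 0\\
     R^{1/2} Y & 0 & 0 & \gamma I
   \end{bmatrix} \succeq 0, \qquad j = 1,\dots,L
   \qquad\text{(vertex form of the decrease condition)}.
\end{align}
% Together these guarantee J_\infty(k) \le \gamma, and the robustly stabilizing
% state feedback gain is recovered as F = Y Q^{-1}.
```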

Then, the infinite horizon RMPC scheme for the hard input-saturated and output-saturated constraints can be set up as in (6)-(10): the worst-case bound is minimized subject to the common Lyapunov LMI, an additional LMI for the componentwise input constraints, and/or an additional LMI for the componentwise output constraints, in which the ith row of the identity matrix is used to select the ith input or output component.
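One standard way to write such componentwise peak constraints in this framework, again following [14], is given below; the symbols u_{i,max}, y_{i,max}, the auxiliary matrix X, and e_i (the ith row of the identity matrix) are assumed names for this sketch and may differ from those used in (6)-(10).

```latex
% Componentwise peak constraints |u_i| <= u_{i,max} and |y_i| <= y_{i,max}
% expressed as sufficient LMIs (standard forms; notation assumed).
\begin{align}
  &\exists\, X = X^{\mathsf T}:\quad
   \begin{bmatrix} X & Y \\ Y^{\mathsf T} & Q \end{bmatrix} \succeq 0,
   \qquad e_i X e_i^{\mathsf T} \le u_{i,\max}^{2},\\[2pt]
  &\begin{bmatrix}
     Q & (A_j Q + B_j Y)^{\mathsf T} C^{\mathsf T} e_i^{\mathsf T}\\
     e_i C (A_j Q + B_j Y) & y_{i,\max}^{2}
   \end{bmatrix} \succeq 0, \qquad j = 1,\dots,L .
\end{align}
```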

At each time instant, and based on the current state, this algorithm generates a feasible state feedback gain; the control input is then calculated immediately from this gain and the current state. There are no free control moves in the prediction; thus, this algorithm can be considered an RMPC scheme with the shortest possible prediction horizon. The stability of the uncertain system in (1) is guaranteed as long as we can always find a common Lyapunov matrix for all models in (8).
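As an illustration of how this scheme can be implemented, the sketch below solves the vertex LMIs with CVXPY for one sampling instant and returns the feedback gain. It is a minimal sketch under the assumed notation of the fragments above (Q = γP⁻¹, Y = FQ), uses placeholder vertex data, and omits the input/output constraint LMIs of (6)-(10).

```python
import cvxpy as cp
import numpy as np

def infinite_horizon_rmpc_gain(A_list, B_list, x, Q1, R):
    """One sampling instant of the infinite-horizon RMPC of [14] (sketch):
    minimize the bound gamma subject to the common-Lyapunov vertex LMIs,
    then recover the state feedback F = Y Q^{-1} (assumed notation)."""
    n, m = B_list[0].shape
    Q = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))
    gamma = cp.Variable(nonneg=True)
    Q1h = np.linalg.cholesky(Q1).T       # Cholesky factors used as Q1^{1/2}, R^{1/2}
    Rh = np.linalg.cholesky(R).T

    cons = [cp.bmat([[np.eye(1), x.reshape(1, -1)],
                     [x.reshape(-1, 1), Q]]) >> 0]          # V(x(k|k)) <= gamma
    for A, B in zip(A_list, B_list):
        AQBY = A @ Q + B @ Y
        cons.append(cp.bmat([
            [Q,        AQBY.T,            (Q1h @ Q).T,       (Rh @ Y).T],
            [AQBY,     Q,                 np.zeros((n, n)),  np.zeros((n, m))],
            [Q1h @ Q,  np.zeros((n, n)),  gamma * np.eye(n), np.zeros((n, m))],
            [Rh @ Y,   np.zeros((m, n)),  np.zeros((m, n)),  gamma * np.eye(m)],
        ]) >> 0)                                             # vertex decrease LMI
    cp.Problem(cp.Minimize(gamma), cons).solve(solver=cp.SCS)
    return Y.value @ np.linalg.inv(Q.value)

# placeholder two-vertex example (not the models of Example 1)
A_list = [np.array([[0.9, 0.2], [0.0, 0.8]]), np.array([[0.9, 0.2], [0.0, 0.7]])]
B_list = [np.array([[0.0], [0.1]])] * 2
F = infinite_horizon_rmpc_gain(A_list, B_list, np.array([1.0, 0.5]),
                               Q1=np.eye(2), R=np.eye(1))
u = F @ np.array([1.0, 0.5])      # control input applied at this sampling instant
```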

When both hard input and output/state constraints are imposed, difficulty arises from the inability of the optimizer to satisfy the state/output constraints because of the constraints on the input. Based on the above infinite horizon RMPC scheme, Casavola et al. [15] formulated a finite horizon RMPC scheme for input-saturated constraints. Then, Minh and Afzulpurkar [16] developed a finite horizon RMPC scheme for input-saturated and softened state constraints. The idea of developing an RMPC scheme with softened state constraints comes from the fact that the RMPC regulator is designed for online implementation, where any infeasible solution of the optimization problem cannot be tolerated. Input constraints are based on physical limits and can be considered hard constraints. If the state constraints are not strictly imposed and can be violated somewhat during the evolution of the system, they can be treated as softened constraints. The finite horizon RMPC control sequence with N free control moves is constructed as follows: the first N inputs are free control moves selected on-line by the MPC solver, corresponding to the initial free control moves referred to in [17, 18], and the remaining inputs are generated by a feasible and robustly stabilizing state feedback. For simplicity, from the following sections, the authors assume that the prediction horizon length is set equal to the control horizon length, both equal to N. The stability of the system is enforced after the N free moves by constraining the input to the state feedback law. It is assumed that the state feedback gain, a common Lyapunov matrix, and the associated bound exist and jointly satisfy the LMIs of the infinite horizon RMPC scheme. The N free input moves are then selected by minimizing the objective function (12), in which the predicted states and inputs are weighted by symmetric matrices; this problem is referred to as the finite horizon RMPC. It is solved subject to the constraint that, for all vertices, the terminal state predictions lie in the ellipsoidal invariant set associated with the common Lyapunov matrix, where the convex hull of all i-step-ahead state predictions from the current state at time k, generated by application of the input sequence, represents the model uncertainty. The robustness of MPC can be increased if we relax some state constraints to softened constraints with penalty terms added to the objective function (12); these terms keep the state violation at low values until a constrained solution is returned. In the resulting formulation, a weighting factor for the softened state constraints (usually a small value) scales the state penalty terms added to the RMPC objective function; the penalty terms depend on the current state, the applied input sequence, and the weighting factor. When the state constraints are satisfied, the RMPC algorithm in (14) sets all softened state penalty terms to zero.
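For illustration, a generic softened-constraint formulation with slack variables can be written as follows. The slack variables s_i, the bound x_max, and the inverse dependence on the weighting factor μ (so that a smaller μ produces a larger penalty, consistent with the discussion in Section 4) are assumptions for this sketch rather than the exact form of the paper's softened-constraint equations.

```latex
% Illustrative softened-constraint formulation (assumed symbols s_i, \mu, x_{\max}):
\begin{align}
  \min_{u(\cdot),\,s_1,\dots,s_N}\;&
     J_N(k) \;+\; \frac{1}{\mu}\sum_{i=1}^{N} s_i^{\mathsf T} s_i \\
  \text{s.t.}\;\;
  & -x_{\max} - s_i \;\le\; x(k+i|k) \;\le\; x_{\max} + s_i, \qquad s_i \ge 0,\\
  & u_{\min} \;\le\; u(k+i|k) \;\le\; u_{\max}
    \qquad\text{(hard input constraints)},
\end{align}
% so that s_i = 0 (zero penalty) whenever the state constraints can be met.
```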

In the following sections, we develop novel tracking setpoint RMPC schemes using LMIs subject to input-saturated and softened state constraints.

4. Tracking Setpoint RMPC Scheme with Common Lyapunov Function

In reality, the system output is usually required to track a reference trajectory, and in some operations the desired values may change over time. Suppose that the system output is required to track a target vector (called the setpoint) and that the desired equilibrium state and input are constant points different from the origin. For the uncertain system in (1), we assume that the model matrices are constant but unknown and that the equilibrium state and input are feasible; that is, they satisfy the imposed constraints and the steady-state equations. The objective function for the infinite horizon RMPC with the tracking setpoint penalizes the deviations of the state and input from their equilibrium values with symmetric weighting matrices, as in (15). We can define a shifted state, a shifted input, and a new weighting matrix to reduce the problem in (15) to standard form, as in (16); the infinite horizon RMPC for the tracking setpoint in (16) then becomes similar to the nontracking infinite horizon RMPC in (3). The finite horizon RMPC control sequence for the tracking setpoint with N free control moves is then developed for the shifted state and input, and the corresponding objective function is given in (18); this problem is referred to as the tracking setpoint RMPC, subject to the shifted input and state constraints for all vertices.
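The change of variables behind (15)-(16) is the standard one for setpoint tracking; the fragment below restates it with assumed symbols (x_s, u_s for the equilibrium pair, r for the setpoint, tildes for shifted quantities), which may differ from those used in the paper's equations.

```latex
% Standard setpoint-tracking change of variables (assumed notation):
\begin{align}
  x_s &= A\,x_s + B\,u_s, \qquad y_s = C\,x_s = r
       \qquad\text{(feasible equilibrium pair)},\\
  \tilde{x}(k) &= x(k) - x_s, \qquad \tilde{u}(k) = u(k) - u_s,\\
  \tilde{x}(k+1) &= A\,\tilde{x}(k) + B\,\tilde{u}(k),
\end{align}
% so that tracking the setpoint r reduces to regulating \tilde{x} to the origin,
% and the nontracking RMPC machinery of Section 3 applies to (\tilde{x}, \tilde{u}).
```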

The saturated input and state constraints must also be expressed in terms of the new shifted input and state, as in (19) and (20); in the tracking setpoint RMPC scheme, the saturated input and state constraints should therefore be imposed in shifted form. A finite tracking setpoint RMPC scheme with a common Lyapunov function, subject to the shifted input and softened state constraints, is developed as follows.
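As a concrete illustration of how a feasible equilibrium pair can be computed for a given setpoint r before shifting the constraints, the snippet below solves the linear steady-state equations for a nominal model; the matrices and the setpoint are placeholders, not the data of Example 1.

```python
import numpy as np

def steady_state_target(A, B, C, r):
    """Solve [A - I  B; C  0] [x_s; u_s] = [0; r] for the equilibrium pair."""
    n, m = B.shape
    p = C.shape[0]
    M = np.block([[A - np.eye(n), B],
                  [C, np.zeros((p, m))]])
    rhs = np.concatenate([np.zeros(n), np.atleast_1d(r)])
    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return sol[:n], sol[n:]                     # x_s, u_s

# placeholder nominal model (not the models of Example 1)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
x_s, u_s = steady_state_target(A, B, C, r=1.0)  # shifted variables: x - x_s, u - u_s
```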

(i) Initialization
Given the initial state, find a common Lyapunov matrix, a feasible state feedback gain, the free control moves, and the softened state penalty terms, where the penalty terms for the softened state constraints in all possible nodes are added to the objective function, subject to conditions (0.i)-(0.iii), which include the shifted state constraints in (20), softened by the weighting factor for the softened state constraints, and the shifted input constraints in (19) on the free control moves.

(ii) Operation
Step 1. At each time k, given the current state, the common Lyapunov matrix, and the state feedback gain, find the free control moves and the softened state penalty terms over all possible nodes and branches, subject to conditions (i)-(iv), which include the shifted state constraints in (20), softened by the weighting factor for the softened state constraints, and the shifted input constraints in (19) on the free control moves.
Step 2. Feed the plant with the first input of the optimal sequence.
Step 3. Given the updated state, find again an updated common Lyapunov matrix and a feasible state feedback gain, subject to conditions (v)-(viii).
Step 4. Update the time index to k + 1 and go to Step 1.
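The Initialization/Operation procedure above can be summarized by the following closed-loop skeleton. Here solve_tracking_rmpc is a hypothetical stub standing in for the LMI optimization of Steps 1 and 3, and the model, gain, and equilibrium values are placeholders chosen only so that the loop runs.

```python
import numpy as np

def solve_tracking_rmpc(x_shifted, F_prev):
    """Placeholder for the min-max LMI problem of Steps 1 and 3:
    returns the free control moves and an (updated) feedback gain.
    This stub simply reuses the previous gain; a real implementation
    would re-solve the LMIs with the softened state constraints."""
    u_free = [F_prev @ x_shifted]          # stub: one 'free' move from the gain
    return u_free, F_prev

A = np.array([[1.0, 0.1], [0.0, 0.9]])     # nominal plant (placeholder values)
B = np.array([[0.0], [0.1]])
x = np.array([2.0, 0.0])                   # current state
x_s, u_s = np.array([1.0, 0.0]), np.array([0.0])   # equilibrium pair for the setpoint
F = np.array([[-0.5, -1.0]])               # assumed pre-computed stabilizing gain

for k in range(20):
    u_free, F = solve_tracking_rmpc(x - x_s, F)    # Step 1 / Step 3
    u = u_s + u_free[0]                            # Step 2: apply the first move
    x = A @ x + B @ u                              # plant update, then Step 4 (k+1)
```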

We illustrate this tracking setpoint RMPC scheme with the following example.

Example 1. We consider a set of two uncertain vertex models, model 1 and model 2, together with the system output equation.

A simulation is conducted in which the system output tracks a constant reference setpoint. The real plant model and the weighting matrices are fixed, and the shifted input and state are calculated from the corresponding equilibrium pair. A hard constraint is imposed on the input and a softened constraint on the state; the new shifted input and state constraints for this tracking setpoint RMPC follow accordingly.

Figure 2 shows the input and the output of the tracking setpoint RMPC for two receding horizon lengths and a fixed weighting factor for the softened state constraint. The hard constraint on the input, which is active near the equilibrium input value, causes oscillations in the output. Results similar to those of Minh and Afzulpurkar [16] are obtained: a significant reduction in conservativeness is achieved by simply adding one free control move when the input commands are close to the constraint boundary. Therefore, the RMPC output with the longer receding horizon approaches its setpoint faster than that with the shorter receding horizon.

Figure 3 compares how quickly the output approaches the setpoint reference for different values of the weighting factor. As the weighting factor decreases, the state penalty terms increase and keep the state violations at lower values. Consequently, the RMPC with a smaller weighting factor approaches the setpoint reference faster than the RMPC with a larger one. The softened state penalty terms help the RMPC find a solution when the state constraints are violated; when the state constraints are satisfied, the RMPC sets all state penalty terms to zero.

Figure 4 shows the results of controlling the state violation by changing the value of the weighting factor. For decreasing weighting values, including 1/10 and 1/100, we obtained maximum state values of 2.6874 and 2.3752, respectively. By decreasing the weighting value, we could significantly reduce the state constraint violations. However, when we decrease the weighting value further, as shown in this example, we cannot reduce the state constraint violations any further, since a control "limit" has been reached. If we set a very small weighting value, the maximum state value remains unchanged at 2.3675.

5. Tracking Setpoint RMPC Scheme with Zero Terminal Equality

A result similar to that in [19] can be applied to the above tracking setpoint RMPC scheme with a common Lyapunov matrix to find a robustly stabilizing state feedback for the uncertain system. However, there are other ways to ensure stability within a finite prediction horizon; one of them is to use a terminal constraint at the end of the prediction horizon.

We consider the objective function of the RMPC in (18). The simplest possibility to enforce stability with a finite prediction horizon is to add a so-called zero terminal equality at the end of the prediction horizon, that is, to constrain the terminal predicted (shifted) state to be zero at every node of the tree. The state feedback gain used to define the input sequence after the first N free moves is also set to zero. The objective function in (18) can then be rewritten accordingly.
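A representative form of such a finite-horizon tracking objective with a zero terminal equality, written in assumed shifted notation (which may differ from the symbols of the original equation (34)), is:

```latex
% Representative zero-terminal-equality formulation (assumed notation):
\begin{align}
  \min_{\tilde{u}(k|k),\dots,\tilde{u}(k+N-1|k)}\;
  & \sum_{i=0}^{N-1}
      \Bigl[\tilde{x}(k+i|k)^{\mathsf T} Q_1\, \tilde{x}(k+i|k)
          + \tilde{u}(k+i|k)^{\mathsf T} R\, \tilde{u}(k+i|k)\Bigr]\\
  \text{s.t.}\;\;
  & \tilde{x}(k+N|k) = 0 \quad\text{at every terminal node of the tree},\\
  & \tilde{u}(k+i|k) = 0 \quad\text{for } i \ge N
    \qquad\text{(terminal feedback gain set to zero)}.
\end{align}
```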

Equation (34) is equivalent to an LMI obtained by using the Schur complement. To prove the stability of the system, we take the optimal value of the objective function as a Lyapunov function. Since this Lyapunov function is decreasing along the closed-loop trajectories, the system with a zero terminal equality is always stable.
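The decrease argument sketched above is the standard value-function argument for MPC with a zero terminal equality; the fragment below states it in assumed notation (V_N for the optimal cost of the problem above, ℓ for the stage cost) rather than the paper's omitted equations.

```latex
% Standard value-function decrease under a zero terminal equality (assumed notation):
\begin{align}
  V_N\bigl(\tilde{x}(k+1)\bigr) \;\le\; V_N\bigl(\tilde{x}(k)\bigr)
     - \ell\bigl(\tilde{x}(k),\tilde{u}^{\ast}(k)\bigr),
  \qquad
  \ell(\tilde{x},\tilde{u}) = \tilde{x}^{\mathsf T} Q_1 \tilde{x}
        + \tilde{u}^{\mathsf T} R\,\tilde{u} > 0 \;\;\text{for } \tilde{x} \neq 0,
\end{align}
% because the tail of the optimal input sequence, extended with zero input, is a
% feasible candidate at time k+1 (the terminal state is already at the origin),
% so V_N decreases along the closed loop and acts as a Lyapunov function.
```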

Another tracking setpoint RMPC scheme, which selects the free control moves subject to a zero terminal equality, is developed as follows.

Step 1. At each time k, given the current state, find the free control moves and the softened state penalty terms over all possible nodes of the uncertain system, subject to conditions (i)-(iv), which include the zero terminal equality at every terminal node, the shifted state constraints in (20) softened by the weighting factor for the softened state constraints, and the shifted input constraints in (19) on the free control moves.

Step 2. Feed the plant with the first input of the optimal sequence.

Step 3. Update the time index to k + 1 and go to Step 1.

The zero terminal equality poses difficulties for the optimization algorithm. Choosing the horizon length N may be challenging because, if N is not chosen long enough, the RMPC scheme may not have a feasible solution. For long prediction lengths N and for large systems, the online min-max optimization of the RMPC becomes computationally expensive. Another shortcoming of the zero terminal equality constraint is that the solution to the minimization problem often takes unusual or unnatural paths in order to satisfy the constraint.

Example 2. We consider the same uncertain models as in Example 1 and simulate the zero terminal equality RMPC scheme with a fixed weighting factor for the softened state constraints. The shortest horizon length that gives closed-loop stability of this RMPC from the chosen initial state is used. The RMPC computation becomes more expensive, since the zero terminal equality has to be evaluated for 64 terminal prediction nodes of the tree.

Figure 5 compares the performance of the RMPC with a zero terminal equality and the RMPC with a common Lyapunov function. We can see that the input of the RMPC scheme with a zero terminal equality follows an unusual path in order to satisfy the terminal equality constraint, rather than yielding a solution similar to that of the RMPC with a common Lyapunov function. As a result, the output of the RMPC with a zero terminal equality oscillates longer than that of the RMPC with a common Lyapunov function; in other words, the RMPC with a common Lyapunov function performs better than the RMPC with a zero terminal equality.

6. Conclusion

We have presented two tracking setpoint RMPC schemes, one with a common Lyapunov function and one with a zero terminal equality, subject to saturated input and softened state constraints. It is assumed that all states of the MPC models are directly measured and that there are no disturbances. As their main advantage, the softened state constraint schemes prove their ability to guarantee stability even when the output setpoint leads to violation of the state constraints.

For the same prediction horizon length, the softened state constraint RMPC is less conservative than RMPC with only hard input constraints. Another advantage of the softened state RMPC schemes is their ability to regulate the state violation by changing the weighting factor. The RMPC scheme with a longer prediction horizon approaches its setpoint faster than that with a shorter prediction horizon, and, for the same prediction horizon length, the RMPC with a common Lyapunov function approaches its setpoint faster than the RMPC with a zero terminal equality.

However, additional analysis is required to assess the effectiveness of the novel schemes with respect to achievable performance and the corresponding CPU time, because of the additional penalty terms added to the objective function. For long prediction lengths and/or large systems, the online RMPC optimization becomes more computationally expensive. The results presented in this research must be viewed as an initial step toward a more comprehensive study of the theoretical properties of the softened state constraint schemes and their application.

Acknowledgment

The authors would like to thank the anonymous reviewers and the editor for their comments, which have helped the authors improve this paper significantly. The authors have taken all of the reviewers' comments into consideration in the final version of the paper. This work was supported by Universiti Teknologi PETRONAS (UTP).