Abstract

This work presents a new contribution to the design of a robust MPC with output feedback, input constraints, and an uncertain model. Multivariable predictive controllers have been used in industry to reduce the variability of the process outputs and to allow operation of the system close to the constraints, where the optimum operating point is usually located. For this reason, new controllers have been developed with the objective of achieving better performance, a simpler control structure, and robustness with respect to model uncertainty. Here, a model predictive controller based on a nonminimal state space model, in which the state is perfectly known, is proposed. It is an infinite prediction horizon controller, and it is assumed that there is uncertainty in the stable part of the model, which may also include integrating modes that are frequently present in process plants. The method is illustrated with a simulation example from the process industry using linear models based on a real process.

1. Introduction

Model predictive control has achieved remarkable popularity in the process industry, with thousands of practical applications [1]. One of the reasons for this industrial acceptance is the ability of MPC to incorporate constraints in the control problem. However, one additional desirable characteristic, still not provided by commercial MPC packages, is closed-loop stability in the presence of model uncertainty, which is usually related to the nonlinearity of the real system. When model uncertainty is considered in the synthesis of the model predictive controller, the majority of the existing robust algorithms demand a computational effort that is prohibitive for practical implementation [2]. From the application viewpoint, we could not find in the control literature a satisfactory solution to the robust MPC problem with output feedback and input constraints. Rodrigues and Odloak [3] presented a formulation of the robust unconstrained MPC with output feedback in which stability is achieved through the explicit inclusion of a Lyapunov inequality in the control optimization problem. Later, in Rodrigues and Odloak [4], the method was extended to allow the switching of active input constraints during transient conditions. In Perez [5], the non-minimal state space model (realigned model) proposed in Maciejowski [6] was extended to the incremental form, and this model was used in the controller proposed in Rodrigues and Odloak [4] to produce a robust controller with reduced computational effort by avoiding the use of a state observer, since the state of the realigned model is directly read from the plant. In this approach, the MPC solves the optimization problem that produces the control law in two stages. The first stage is solved offline and computes a family of linear controllers in which Lyapunov inequalities are added to the control problem so as to ensure state contraction (stability). These linear controllers match all possible configurations of saturation of the manipulated variables for a given set of variables that need to be controlled. The second stage is solved online and computes a convex combination of these linear controllers through an optimization problem that includes all the input bounds. These combinations span all possible control configurations in terms of controlled outputs and manipulated inputs. For each control configuration, the linear controller is robust to all the process models defining the multiplant uncertainty. The main disadvantage of this approach is that the number of possible control configurations corresponding to the possible combinations of saturated inputs may become very large in systems of high dimension. Then, the offline step of the control problem may be computationally very expensive, or there may not be a feasible solution. The realigned model was applied in González et al. [7] to the development of an infinite horizon MPC that is nominally stable for open-loop stable and integrating systems. The same non-minimal model was applied in González and Odloak [8] to the development of a robust MPC for uncertain stable systems.

The main objective of this work is to extend the controller proposed in González and Odloak [8] to systems with stable and integrating outputs. The method proposed here is based on the non-minimal state space model formulation presented in Maciejowski [6], and the control problem considers that there may be uncertainty in the stable part of the model, while the model of the integrating part is perfectly known. In Section 2, the non-minimal (realigned) model in the incremental form is developed for systems with stable and integrating outputs. The realigned model is split into two parts, one related to the stable outputs and the other related to the integrating outputs. Model uncertainty is associated only with the stable outputs, and it is characterized by a family of possible linear time-invariant models. It is assumed that the real model of the system is unknown, but at any time instant the system can be represented by one of the models of this family. In practice, the models of the integrating outputs are less affected by uncertainty, and here this uncertainty is neglected. In Section 3, the optimization problem that produces the robust controller is presented. Since the inputs affect both the stable and the integrating outputs, the cost function of the robust MPC proposed here includes a combination of the weighted predicted errors of all the outputs. Several results concerning the robustness of the convergence and stability of the resulting closed-loop system are provided. Next, in Section 4, a simulation example based on linear models of a process system from the oil refining industry is used to illustrate the application of the new method, and finally, in Section 5, the paper is concluded.

2. The Nonminimal State Space Model with Measured State

Consider the following discrete time-invariant model:
$$y(k)+\sum_{i=1}^{na}A_i\,y(k-i)=\sum_{i=1}^{nb}B_i\,u(k-i), \qquad (1)$$
where $na$ and $nb$ are the orders of the discrete model, $A_i\in\mathbb{R}^{ny\times ny}$ and $B_i\in\mathbb{R}^{ny\times nu}$ are the coefficient matrices that correspond to the parameters of the model, $y(k)\in\mathbb{R}^{ny}$ is the system output, and $u(k)\in\mathbb{R}^{nu}$ is the system input. Here, it is assumed that in the model represented above there are no zero-pole cancellations. In Maciejowski [6], it is shown that this model can be written in a state space form in which the state is built with the measurements of the plant inputs and outputs at different time instants. In González et al. [7], the same model is written in the input incremental form:
$$\begin{bmatrix}x_y(k+1)\\ x_{\Delta u}(k+1)\end{bmatrix}=\begin{bmatrix}A_y & A_{\Delta u}\\ \underline{0} & \underline{I}\end{bmatrix}\begin{bmatrix}x_y(k)\\ x_{\Delta u}(k)\end{bmatrix}+\begin{bmatrix}B_{\Delta u}\\ \tilde{I}\end{bmatrix}\Delta u(k), \qquad y(k)=\begin{bmatrix}C_y & C_{\Delta u}\end{bmatrix}\begin{bmatrix}x_y(k)\\ x_{\Delta u}(k)\end{bmatrix}, \qquad (2)$$
where
$$x_y(k)=\begin{bmatrix}y(k)^T & y(k-1)^T & \cdots & y(k-na+1)^T & y(k-na)^T\end{bmatrix}^T\in\mathbb{R}^{(na+1)ny},$$
$$x_{\Delta u}(k)=\begin{bmatrix}\Delta u(k-1)^T & \Delta u(k-2)^T & \cdots & \Delta u(k-nb+1)^T\end{bmatrix}^T\in\mathbb{R}^{(nb-1)nu}, \qquad (3)$$
$k$ is the present sampling instant, $\Delta u(k)=u(k)-u(k-1)\in\mathbb{R}^{nu}$ is the input increment, and $y\in\mathbb{R}^{ny}$ is the output. The matrices involved in this model are (see more details in Maciejowski [6])
$$A_y=\begin{bmatrix}I-A_1 & A_1-A_2 & A_2-A_3 & \cdots & A_{na-1}-A_{na} & A_{na}\\ I_{ny\times ny} & 0 & 0 & \cdots & 0 & 0\\ 0 & I_{ny\times ny} & 0 & \cdots & 0 & 0\\ \vdots & & \ddots & & & \vdots\\ 0 & 0 & \cdots & I_{ny\times ny} & 0 & 0\\ 0 & 0 & \cdots & 0 & I_{ny\times ny} & 0\end{bmatrix},\quad A_y\in\mathbb{R}^{(na+1)ny\times(na+1)ny},$$
$$A_{\Delta u}=\begin{bmatrix}B_2 & \cdots & B_{nb}\\ 0_{ny\times nu} & \cdots & 0_{ny\times nu}\\ \vdots & & \vdots\\ 0_{ny\times nu} & \cdots & 0_{ny\times nu}\end{bmatrix},\quad A_{\Delta u}\in\mathbb{R}^{(na+1)ny\times(nb-1)nu},$$
$$\underline{I}=\begin{bmatrix}0_{nu} & & & 0_{nu}\\ I_{nu} & 0_{nu} & & \\ & \ddots & \ddots & \\ & & I_{nu} & 0_{nu}\end{bmatrix},\quad \underline{I}\in\mathbb{R}^{(nb-1)nu\times(nb-1)nu},\qquad \underline{0}\in\mathbb{R}^{(nb-1)nu\times(na+1)ny},$$
$$B_{\Delta u}=\begin{bmatrix}B_1\\ 0_{ny\times nu}\\ \vdots\\ 0_{ny\times nu}\end{bmatrix},\quad B_1\in\mathbb{R}^{ny\times nu},\quad B_{\Delta u}\in\mathbb{R}^{(na+1)ny\times nu},\qquad \tilde{I}=\begin{bmatrix}I_{nu}\\ 0_{nu}\\ \vdots\\ 0_{nu}\end{bmatrix},\quad \tilde{I}\in\mathbb{R}^{(nb-1)nu\times nu},$$
$$C_y=\underbrace{\begin{bmatrix}I_{ny\times ny} & 0_{ny\times ny} & \cdots & 0_{ny\times ny}\end{bmatrix}}_{na+1\ \text{blocks}},\qquad C_{\Delta u}=\underbrace{\begin{bmatrix}0_{ny\times nu} & \cdots & 0_{ny\times nu}\end{bmatrix}}_{nb-1\ \text{blocks}}. \qquad (4)$$
In the system defined by (2), the input and output values read from the plant are used to realign the model state, which gives the controller a better disturbance rejection capability. In González and Odloak [8], it is shown that the model presented above is detectable and stabilizable. More detailed discussions of minimal state space representations and the resulting observability and controllability of time-delayed systems can be found in De La Sen [9].
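
As an illustration of how the matrices in (2)-(4) can be assembled in practice, the following sketch builds the realigned incremental state-space model from the coefficients $A_i$ and $B_i$ of (1). It is only a minimal NumPy sketch; the function name and the list-based interface are our own choices and not part of the paper.

```python
import numpy as np

def realigned_incremental_model(A_list, B_list):
    """Assemble (A, B, C) of the realigned incremental model (2)-(4).

    A_list = [A_1, ..., A_na] (each ny x ny), B_list = [B_1, ..., B_nb] (each ny x nu).
    State: x = [y(k); y(k-1); ...; y(k-na); du(k-1); ...; du(k-nb+1)].
    """
    na, nb = len(A_list), len(B_list)
    ny, nu = B_list[0].shape
    nxy, nxu = (na + 1) * ny, (nb - 1) * nu

    # A_y: first block row [I - A_1, A_1 - A_2, ..., A_{na-1} - A_na, A_na],
    # followed by a shift register that stores the past outputs.
    Ay = np.zeros((nxy, nxy))
    first_row = [np.eye(ny) - A_list[0]]
    first_row += [A_list[i] - A_list[i + 1] for i in range(na - 1)]
    first_row += [A_list[-1]]
    Ay[:ny, :] = np.hstack(first_row)
    for i in range(na):
        Ay[(i + 1) * ny:(i + 2) * ny, i * ny:(i + 1) * ny] = np.eye(ny)

    # A_du: first block row [B_2, ..., B_nb]; zeros elsewhere.
    Adu = np.zeros((nxy, nxu))
    if nb > 1:
        Adu[:ny, :] = np.hstack(B_list[1:])

    # Shift register for the past input increments.
    Ibar = np.zeros((nxu, nxu))
    for i in range(nb - 2):
        Ibar[(i + 1) * nu:(i + 2) * nu, i * nu:(i + 1) * nu] = np.eye(nu)

    Bdu = np.zeros((nxy, nu))
    Bdu[:ny, :] = B_list[0]
    Itil = np.zeros((nxu, nu))
    if nb > 1:
        Itil[:nu, :] = np.eye(nu)

    A = np.block([[Ay, Adu], [np.zeros((nxu, nxy)), Ibar]])
    B = np.vstack([Bdu, Itil])
    C = np.hstack([np.eye(ny), np.zeros((ny, nxy - ny + nxu))])
    return A, B, C
```

For instance, a single integrating output $y(k)=y(k-1)+0.5\,u(k-1)+0.2\,u(k-2)$ corresponds to the call realigned_incremental_model([-np.eye(1)], [np.array([[0.5]]), np.array([[0.2]])]).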

For the control implementation, and in order to better locate the uncertainties within the process model, it is convenient to split the model into two separate parts. The first part of the model is related to the pure integrating outputs, and the second part is related to the stable outputs. Here, it is assumed that the process has no output associated simultaneously with stable and integrating modes. Then, suppose that the model defined in (2) is written for the integrating outputs:
$$\underbrace{\begin{bmatrix}x^i_y(k+1)\\ x^i_{\Delta u}(k+1)\end{bmatrix}}_{x^i(k+1)}=\underbrace{\begin{bmatrix}A^i_y & A^i_{\Delta u}\\ \underline{0} & \underline{I}\end{bmatrix}}_{A^i}\begin{bmatrix}x^i_y(k)\\ x^i_{\Delta u}(k)\end{bmatrix}+\underbrace{\begin{bmatrix}B^i_{\Delta u}\\ \tilde{I}\end{bmatrix}}_{B^i}\Delta u(k), \qquad (5)$$
$$y^i(k)=\begin{bmatrix}C^i_y & C^i_{\Delta u}\end{bmatrix}\begin{bmatrix}x^i_y(k)\\ x^i_{\Delta u}(k)\end{bmatrix}. \qquad (6)$$
Analogously, a similar model can be written for the stable outputs:
$$\underbrace{\begin{bmatrix}x^s_{y,\theta_p}(k+1)\\ x^s_{\Delta u}(k+1)\end{bmatrix}}_{x^s_{\theta_p}(k+1)}=\underbrace{\begin{bmatrix}A^s_y(\theta_p) & A^s_{\Delta u}(\theta_p)\\ \underline{0} & \underline{I}\end{bmatrix}}_{A^s_{\theta_p}}\begin{bmatrix}x^s_{y,\theta_p}(k)\\ x^s_{\Delta u}(k)\end{bmatrix}+\underbrace{\begin{bmatrix}B^s_{\Delta u}(\theta_p)\\ \tilde{I}\end{bmatrix}}_{B^s_{\theta_p}}\Delta u(k), \qquad (7)$$
$$y^s_{\theta_p}(k)=\begin{bmatrix}C^s_y & C^s_{\Delta u}\end{bmatrix}\begin{bmatrix}x^s_{y,\theta_p}(k)\\ x^s_{\Delta u}(k)\end{bmatrix}. \qquad (8)$$
Observe that some of the components of the model defined in (7) and (8) depend on $\theta_p$, the vector of parameters that defines the model ($\theta_p=[A_p,B_p]$). These parameters are unknown, which characterizes the uncertainty in the part of the model related to the stable outputs. The simplest way to represent this uncertainty is the multimodel description, in which $\theta_p$ belongs to a finite set $\Omega=\{\theta_1,\ldots,\theta_L\}$. This means that the true model of the process is unknown, but it is one of the $L$ models of a known finite set.
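
For illustration only, the multimodel set $\Omega=\{\theta_1,\ldots,\theta_L\}$ can be represented as a plain list of parameter sets for the stable outputs, with a single (assumed exact) model kept for the integrating outputs; the data structure below is our own sketch, not a construct from the paper.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class StableModel:            # one element theta_p = [A_p, B_p] of the set Omega
    A: List[np.ndarray]       # A_1, ..., A_na of the stable outputs
    B: List[np.ndarray]       # B_1, ..., B_nb of the stable outputs

# Omega = {theta_1, ..., theta_L}; the true plant is assumed to be one of them.
Omega = [
    StableModel(A=[np.array([[-0.9]])], B=[np.array([[0.5, 0.1]])]),
    StableModel(A=[np.array([[-0.8]])], B=[np.array([[1.0, 0.2]])]),
]
theta_N = Omega[0]            # nominal (most probable) model
```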

Within the conventional MPC formulation, the control horizon $m$ is defined such that $\Delta u(k+j)=0$ for $j=m,m+1,\ldots$. Then, with the model representation defined in (2) and (3), it is easy to show that $x_{\Delta u}(k+n+j)=0$, $j=1,2,\ldots$, where $n=m+nb-1$, and $x_y(k+n+j)=A_y\,x_y(k+n+j-1)$, $j=1,2,\ldots$.

Then, after time step $k+n$, it is possible to apply the Jordan decomposition of $A_y$ as follows:
$$A_y V = V A_d, \qquad (9)$$
where $A_d$ is a block diagonal matrix with the eigenvalues of $A_y$ on its main diagonal. If $A_y$ is a full-rank square matrix, then $V$ is invertible and it is possible to make a change of variables such that the integrating states related to the incremental form of the model are separated from the remaining states of the system. For instance, for the integrating system defined in (5) and (6), one can define
$$x^i_y(k)=V^i z^i(k), \qquad x^i_y(k)=\begin{bmatrix}V^i_\Delta & V^i_i\end{bmatrix}\begin{bmatrix}z_{\Delta i}(k)\\ z_{ii}(k)\end{bmatrix}, \qquad (10)$$
where $z_{\Delta i}$ corresponds to the integrating states related to the incremental form of the model and $z_{ii}$ is the state associated with the real integrating states of the system. $V^i$ is obtained from the Jordan decomposition of matrix $A^i_y$ as follows:
$$A^i_y V^i = V^i A^i_d. \qquad (11)$$
It can be shown that
$$A^i_d=\begin{bmatrix}I_{ny_i} & I_{ny_i}\\ 0 & I_{ny_i}\end{bmatrix},$$
where $ny_i$ is the number of integrating outputs.
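
The structure claimed for $A^i_d$ can be verified symbolically. The sketch below uses SymPy for a single pure integrating output with $na=1$, for which $A^i_y=\begin{bmatrix}2&-1\\1&0\end{bmatrix}$ in the incremental form; the numeric example is ours and serves only to illustrate (11).

```python
import sympy as sp

# One pure integrating output, na = 1: y(k) = y(k-1) + B_1 u(k-1), i.e. A_1 = -I,
# so the first block row of A^i_y is [I - A_1, A_1] = [2, -1].
Aiy = sp.Matrix([[2, -1],
                 [1,  0]])

V, Ad = Aiy.jordan_form()               # returns V, A_d with A^i_y V = V A_d
print(Ad)                               # Matrix([[1, 1], [0, 1]]) -> the claimed structure
print(sp.simplify(Aiy * V - V * Ad))    # zero matrix, confirming (11)
```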

It should be noted that, after time step $k+n$, the transformed state of the integrating system will evolve according to the equation
$$\begin{bmatrix}z_{\Delta i}(k+n+1)\\ z_{ii}(k+n+1)\end{bmatrix}=\begin{bmatrix}I_{ny_i} & I_{ny_i}\\ 0 & I_{ny_i}\end{bmatrix}\begin{bmatrix}z_{\Delta i}(k+n)\\ z_{ii}(k+n)\end{bmatrix}. \qquad (12)$$
Analogously, for the system represented through (7) and (8), the following state transformation can be defined:
$$x^s_{y,\theta_p}(k)=V^{s,\theta_p} z^s_{\theta_p}(k)=\begin{bmatrix}V^{s,\theta_p}_\Delta & V^{s,\theta_p}_s\end{bmatrix}\begin{bmatrix}z_{\Delta s,\theta_p}(k)\\ z_{ss,\theta_p}(k)\end{bmatrix}, \qquad (13)$$
where $z_{\Delta s,\theta_p}$ corresponds to the integrating states related to the incremental form of the model and $z_{ss,\theta_p}$ is the state associated with the stable modes of the system corresponding to the parameter vector $\theta_p$. $V^{s,\theta_p}$ is obtained from the Jordan decomposition of matrix $A^s_y(\theta_p)$ as follows:
$$A^s_y(\theta_p)\,V^{s,\theta_p}=V^{s,\theta_p} A^s_{d,\theta_p}. \qquad (14)$$
It can be shown that
$$A^s_{d,\theta_p}=\begin{bmatrix}I_{ny_s} & 0\\ 0 & F^s_{\theta_p}\end{bmatrix}, \qquad (15)$$
where $ny_s$ is the number of stable outputs of the system. If the stable poles of the system are not repeated, $F^s_{\theta_p}$ is a diagonal matrix containing the stable eigenvalues of $A^s_y(\theta_p)$.

After time step $k+n$, the transformed state corresponding to the stable outputs will evolve according to the equation
$$\begin{bmatrix}z_{\Delta s,\theta_p}(k+n+1)\\ z_{ss,\theta_p}(k+n+1)\end{bmatrix}=\begin{bmatrix}I_{ny_s} & 0\\ 0 & F^s_{\theta_p}\end{bmatrix}\begin{bmatrix}z_{\Delta s,\theta_p}(k+n)\\ z_{ss,\theta_p}(k+n)\end{bmatrix}. \qquad (16)$$

3. The Robust MPC with Output Feedback

The system outputs need to be controlled through the manipulation of the $nu$ inputs. Then, the robust MPC proposed here is based on the following objective function:
$$\begin{aligned}
J_k(\theta_p)={}&\sum_{j=1}^{n}\big[y^i(k+j|k)-y^i_{sp}-\delta^i_{\Delta,k}-j\,\delta^i_{i,k}\big]^T Q_i\big[y^i(k+j|k)-y^i_{sp}-\delta^i_{\Delta,k}-j\,\delta^i_{i,k}\big]\\
&+\sum_{j=n+1}^{\infty}\big[y^i(k+j|k)-y^i_{sp}-\delta^i_{\Delta,k}-j\,\delta^i_{i,k}\big]^T Q_i\big[y^i(k+j|k)-y^i_{sp}-\delta^i_{\Delta,k}-j\,\delta^i_{i,k}\big]\\
&+\sum_{j=1}^{n}\big[y^s_{\theta_p}(k+j|k)-y^s_{sp}-\delta^s_{\Delta,k}(\theta_p)\big]^T Q_s\big[y^s_{\theta_p}(k+j|k)-y^s_{sp}-\delta^s_{\Delta,k}(\theta_p)\big]\\
&+\sum_{j=n+1}^{\infty}\big[y^s_{\theta_p}(k+j|k)-y^s_{sp}-\delta^s_{\Delta,k}(\theta_p)\big]^T Q_s\big[y^s_{\theta_p}(k+j|k)-y^s_{sp}-\delta^s_{\Delta,k}(\theta_p)\big]\\
&+\sum_{j=0}^{m-1}\Delta u(k+j|k)^T R\,\Delta u(k+j|k)+\delta^{i\;T}_{\Delta,k}S^i_\Delta\delta^i_{\Delta,k}+\delta^s_{\Delta,k}(\theta_p)^T S^s_\Delta\delta^s_{\Delta,k}(\theta_p)+\delta^{i\;T}_{i,k}S^i_i\delta^i_{i,k},
\end{aligned} \qquad (17)$$
where $y^i(k+j|k)$ is the prediction of the integrating output at time $k+j$ computed at time $k$ and $y^s_{\theta_p}(k+j|k)$ is the prediction of the stable output corresponding to model $\theta_p$; $\delta^i_{\Delta,k}$, $\delta^s_{\Delta,k}(\theta_p)$, and $\delta^i_{i,k}$ are slack variables included in the control problem to guarantee that $J_k(\theta_p)$ is bounded; $Q_i$, $Q_s$, $R$, $S^i_\Delta$, $S^s_\Delta$, and $S^i_i$ are positive definite weighting matrices. In the second term on the right-hand side of (17), the prediction of the integrating output can be written as
$$y^i(k+n+j|k)=C^i_y x^i_y(k+n+j|k)+C^i_{\Delta u}x^i_{\Delta u}(k+n+j|k). \qquad (18)$$
But, since $x^i_{\Delta u}(k+n+j|k)=0$, (18) becomes
$$y^i(k+n+j|k)=C^i_y x^i_y(k+n+j|k). \qquad (19)$$
Then, using (10) and (12), (19) can be written as
$$y^i(k+n+j|k)=C^i_y V^i_\Delta\big[z_{\Delta i}(k+n|k)+j\,z_{ii}(k+n|k)\big]+C^i_y V^i_i\,z_{ii}(k+n|k). \qquad (20)$$
If (20) is substituted into the infinite sum represented by the second term on the right-hand side of (17), it is easy to see that the objective function defined in (17) will be bounded only if the following constraints are satisfied:
$$C^i_y V^i_\Delta z_{\Delta i}(k+n|k)+C^i_y V^i_i z_{ii}(k+n|k)-y^i_{sp}-\delta^i_{\Delta,k}-n\,\delta^i_{i,k}=0, \qquad (21)$$
$$C^i_y V^i_\Delta z_{ii}(k+n|k)-\delta^i_{i,k}=0. \qquad (22)$$
Analogously, in the infinite sum represented by the fourth term on the right-hand side of (17), the prediction of the stable output can be written as
$$y^s_{\theta_p}(k+n+j|k)=C^s_y x^s_{y,\theta_p}(k+n+j|k). \qquad (23)$$
Now, using (13) and (16), (23) can be written as
$$y^s_{\theta_p}(k+n+j|k)=C^s_y V^{s,\theta_p}_\Delta z_{\Delta s,\theta_p}(k+n|k)+C^s_y V^{s,\theta_p}_s\big(F^s_{\theta_p}\big)^{j}z_{ss,\theta_p}(k+n|k). \qquad (24)$$
If the expression for the output prediction obtained in (24) is substituted into the infinite sum represented by the fourth term on the right-hand side of (17), it is easy to see that the cost function will be bounded only if the following constraint is satisfied:
$$C^s_y V^{s,\theta_p}_\Delta z_{\Delta s,\theta_p}(k+n|k)-y^s_{sp}-\delta^s_{\Delta,k}(\theta_p)=0. \qquad (25)$$
Consequently, the optimization problem that produces the control law proposed here will have to include the constraints defined in (21), (22), and (25).

Equation (21) can be developed in terms of the state available at time $k$ and the vector of future control moves as follows:
$$\big(C^i_y V^i_\Delta N^i_1+C^i_y V^i_i N^i_2\big)\big(V^i\big)^{-1}N^i\,x^i(k+n|k)-y^i_{sp}-\delta^i_{\Delta,k}-n\,\delta^i_{i,k}=0, \qquad (26)$$
where $N^i_1$ is a matrix of ones and zeros that collects the component $z_{\Delta i}$ from $z^i$, matrix $N^i_2$ collects the component $z_{ii}$ from $z^i$, and $N^i$ collects the component $x^i_y$ from $x^i$. The model defined in (5) can be used to represent (26) in terms of the integrating state, which is measured at time $k$:
$$\big(C^i_y V^i_\Delta N^i_1+C^i_y V^i_i N^i_2\big)\big(V^i\big)^{-1}N^i\Big[\big(A^i\big)^{n}x^i(k)+\tilde{B}^i_m\,\Delta u_k\Big]-y^i_{sp}-\delta^i_{\Delta,k}-n\,\delta^i_{i,k}=0, \qquad (27)$$
where $\Delta u_k=\big[\Delta u(k|k)^T\ \cdots\ \Delta u(k+m-1|k)^T\big]^T$ and $\tilde{B}^i_m=\big[(A^i)^{n-1}B^i\ \cdots\ (A^i)^{n-m}B^i\big]$.
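
The terminal-state prediction used in (27) and the matrix $\tilde{B}^i_m$ follow directly from iterating the state equation; a minimal NumPy sketch (the helper name is ours) is given below.

```python
import numpy as np

def terminal_state_prediction(A, B, x_k, du_k, m, nb):
    """Return x(k+n|k) = A^n x(k) + Btil_m du_k and Btil_m, with n = m + nb - 1
    and Btil_m = [A^(n-1) B, A^(n-2) B, ..., A^(n-m) B], as in (27)."""
    n = m + nb - 1
    Btil_m = np.hstack([np.linalg.matrix_power(A, n - 1 - j) @ B for j in range(m)])
    x_pred = np.linalg.matrix_power(A, n) @ x_k + Btil_m @ du_k
    return x_pred, Btil_m
```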

The constraint (22) can also be represented in terms of the present state and the vector of future control moves:
$$C^i_y V^i_\Delta N^i_1\big(V^i\big)^{-1}N^i\Big[\big(A^i\big)^{n}x^i(k)+\tilde{B}^i_m\,\Delta u_k\Big]-\delta^i_{i,k}=0. \qquad (28)$$
Analogously, (25) can be written as follows:
$$C^s_y V^{s,\theta_p}_\Delta N^s_1\big(V^{s,\theta_p}\big)^{-1}N^s_2\Big[\big(A^s_{\theta_p}\big)^{n}x^s_{\theta_p}(k)+\tilde{B}^s_m(\theta_p)\,\Delta u_k\Big]-y^s_{sp}-\delta^s_{\Delta,k}(\theta_p)=0, \qquad (29)$$
where $N^s_1$ captures the state component $z_{\Delta s,\theta_p}$ of state $z^s_{\theta_p}$ and $N^s_2$ captures the component $x^s_{y,\theta_p}$ of state $x^s_{\theta_p}$. With the constraints discussed above, the control cost function defined in (17) can be written as follows:
$$\begin{aligned}
J_k(\theta_p)={}&\sum_{j=1}^{n}\big[y^i(k+j|k)-y^i_{sp}-\delta^i_{\Delta,k}-j\,\delta^i_{i,k}\big]^T Q_i\big[y^i(k+j|k)-y^i_{sp}-\delta^i_{\Delta,k}-j\,\delta^i_{i,k}\big]\\
&+\sum_{j=1}^{n}\big[y^s_{\theta_p}(k+j|k)-y^s_{sp}-\delta^s_{\Delta,k}(\theta_p)\big]^T Q_s\big[y^s_{\theta_p}(k+j|k)-y^s_{sp}-\delta^s_{\Delta,k}(\theta_p)\big]\\
&+x^s_{\theta_p}(k+n|k)^T\big(N^s_2\big)^T\big(V^{s,\theta_p}\big)^{-T}\big(N^s_3\big)^T\bar{Q}_s\,N^s_3\big(V^{s,\theta_p}\big)^{-1}N^s_2\,x^s_{\theta_p}(k+n|k)\\
&+\sum_{j=0}^{m-1}\Delta u(k+j|k)^T R\,\Delta u(k+j|k)+\delta^{i\;T}_{\Delta,k}S^i_\Delta\delta^i_{\Delta,k}+\delta^s_{\Delta,k}(\theta_p)^T S^s_\Delta\delta^s_{\Delta,k}(\theta_p)+\delta^{i\;T}_{i,k}S^i_i\delta^i_{i,k},
\end{aligned} \qquad (30)$$
where $N^s_3$ collects the component $z_{ss,\theta_p}$ from $z^s_{\theta_p}$ and the terminal weight $\bar{Q}_s$ is obtained from the solution of the following equation:
$$\bar{Q}_s-\big(F^s_{\theta_p}\big)^T\bar{Q}_s F^s_{\theta_p}=\big(F^s_{\theta_p}\big)^T\big(V^{s,\theta_p}_s\big)^T\big(C^s_y\big)^T Q_s\,C^s_y V^{s,\theta_p}_s F^s_{\theta_p}. \qquad (31)$$
Observe that (30) is a finite horizon cost function with a terminal weight computed through (31).
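
Since (31) is a discrete Lyapunov equation in the terminal weight $\bar{Q}_s$, it can be solved with standard linear-algebra routines and, because it does not depend on the current state, it can be solved once for each model $\theta_p$. The sketch below uses SciPy; the function and variable names are ours, and the final two lines use identity placeholders only to show a call.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def terminal_weight(Fs, Vss, Csy, Qs):
    """Solve (31): Qbar - Fs^T Qbar Fs = Fs^T Vss^T Csy^T Qs Csy Vss Fs."""
    rhs = Fs.T @ Vss.T @ Csy.T @ Qs @ Csy @ Vss @ Fs
    # solve_discrete_lyapunov(a, q) returns X such that X = a X a^T + q,
    # i.e. X - a X a^T = q; taking a = Fs^T gives X - Fs^T X Fs = rhs.
    return solve_discrete_lyapunov(Fs.T, rhs)

# Example call: a diagonal Fs of stable poles, identity placeholders for Vss, Csy, Qs.
Fs = np.diag([0.902, 0.8632, 0.9174])
Qbar = terminal_weight(Fs, np.eye(3), np.eye(3), np.eye(3))
```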

One can now define the robust MPC with output feedback for systems with integrating and stable outputs. The controller is robust in the sense that it maintains stability even in the presence of uncertainty in the part of the model related to the stable outputs. Assuming that $\theta_N$ corresponds to the nominal or most probable model, the proposed controller is based on the solution of the following problem.

Problem 1. Consider the following objective function:
$$\min_{\Delta u_k,\;\delta^i_{i,k},\;\delta^i_{\Delta,k},\;\delta^s_{\Delta,k}(\theta_p),\;p=1,\ldots,L}\;J_k(\theta_N) \qquad (32)$$
subject to (26), (28), and
$$C^s_y V^{s,\theta_p}_\Delta N^s_1\big(V^{s,\theta_p}\big)^{-1}N^s_2\Big[\big(A^s_{\theta_p}\big)^{n}x^s_{\theta_p}(k)+\tilde{B}^s_m(\theta_p)\,\Delta u_k\Big]-y^s_{sp}-\delta^s_{\Delta,k}(\theta_p)=0,\quad p=1,\ldots,L,$$
$$-\Delta u_{\max}\le\Delta u(k+j|k)\le\Delta u_{\max},\quad j=0,\ldots,m-1,$$
$$u_{\min}\le u(k+j|k)\le u_{\max},\quad j=0,\ldots,m-1,$$
$$J_k(\theta_p)\le\tilde{J}_k(\theta_p),\quad p=1,\ldots,L, \qquad (33)$$
$$\delta^i_{i,k}\le\tilde{\delta}^i_{i,k}, \qquad (34)$$
where $\tilde{J}_k(\theta_p)$ is computed with
$$\Delta\tilde{u}_k=\big[\Delta u^*(k|k-1)^T\ \ \Delta u^*(k+1|k-1)^T\ \cdots\ \Delta u^*(k+m-2|k-1)^T\ \ 0\big]^T, \qquad (35)$$
where $\Delta u^*(k+j|k-1)$ corresponds to the optimal solution of Problem 1 at time step $k-1$, and with the slacks $\tilde{\delta}^i_{i,k}$, $\tilde{\delta}^i_{\Delta,k}$, $\tilde{\delta}^s_{\Delta,k}(\theta_p)$, $p=1,\ldots,L$, obtained from the solution of the following equations:
$$C^s_y V^{s,\theta_p}_\Delta N^s_1\big(V^{s,\theta_p}\big)^{-1}N^s_2\Big[\big(A^s_{\theta_p}\big)^{n}x^s_{\theta_p}(k)+\tilde{B}^s_m(\theta_p)\,\Delta\tilde{u}_k\Big]-y^s_{sp}-\tilde{\delta}^s_{\Delta,k}(\theta_p)=0,\quad p=1,\ldots,L,$$
$$\big(C^i_y V^i_\Delta N^i_1+C^i_y V^i_i N^i_2\big)\big(V^i\big)^{-1}N^i\Big[\big(A^i\big)^{n}x^i(k)+\tilde{B}^i_m\,\Delta\tilde{u}_k\Big]-y^i_{sp}-\tilde{\delta}^i_{\Delta,k}-n\,\tilde{\delta}^i_{i,k}=0,$$
$$C^i_y V^i_\Delta N^i_1\big(V^i\big)^{-1}N^i\Big[\big(A^i\big)^{n}x^i(k)+\tilde{B}^i_m\,\Delta\tilde{u}_k\Big]-\tilde{\delta}^i_{i,k}=0. \qquad (36)$$
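
Problem 1 has a convex quadratic cost with convex quadratic (cost-contracting) constraints, so it can be posed as a QCQP and solved online with off-the-shelf convex solvers. The sketch below only mimics the structure of the problem: all matrices are random placeholders standing in for the quantities that, in a real implementation, would be built from (26)-(31) and (35)-(36), and every name in it is our own. It shows that the slacks are affine functions of $\Delta u_k$ through the equality constraints, that the nominal cost $J_k(\theta_N)$ is minimized, and that the contracting constraints (33) and (34) are imposed for every model of $\Omega$.

```python
import numpy as np
import cvxpy as cp

# Toy dimensions and placeholder data (structure only, not the paper's example).
m, nu, nyi, nys, L = 2, 3, 1, 1, 2
rng = np.random.default_rng(1)
ne = 8                                               # length of the stacked error prediction
G = [0.3 * rng.standard_normal((ne, m * nu)) for _ in range(L)]     # du -> predicted errors
cfree = [rng.standard_normal(ne) for _ in range(L)]                 # free response minus setpoints
Din = 0.3 * rng.standard_normal((ne, 2 * nyi))       # effect of [delta_i_Delta; delta_i_i]
Dst = 0.3 * rng.standard_normal((ne, nys))           # effect of delta_s_Delta(theta_p)
Mi, ri = 0.1 * rng.standard_normal((2 * nyi, m * nu)), rng.standard_normal(2 * nyi)
Ms = [0.1 * rng.standard_normal((nys, m * nu)) for _ in range(L)]
rs = [rng.standard_normal(nys) for _ in range(L)]
Qbar, Rbar = np.eye(ne), 0.1 * np.eye(m * nu)
Si, Ss = 1e2 * np.eye(2 * nyi), 1e2 * np.eye(nys)
du_max, u_prev, u_lim = 0.5, np.zeros(nu), 2.0

du = cp.Variable(m * nu)                             # stacked future input moves
d_int = cp.Variable(2 * nyi)                         # [delta_i_Delta; delta_i_i]
d_st = [cp.Variable(nys) for _ in range(L)]          # delta_s_Delta(theta_p)

def J(p, du_, dint_, dst_):
    e = cfree[p] + G[p] @ du_ - Din @ dint_ - Dst @ dst_
    return cp.quad_form(e, Qbar) + cp.quad_form(du_, Rbar) \
        + cp.quad_form(dint_, Si) + cp.quad_form(dst_, Ss)

def J_num(p, du_, dint_, dst_):                      # same cost, evaluated numerically
    e = cfree[p] + G[p] @ du_ - Din @ dint_ - Dst @ dst_
    return float(e @ Qbar @ e + du_ @ Rbar @ du_ + dint_ @ Si @ dint_ + dst_ @ Ss @ dst_)

# Reference ("tilde") quantities from the shifted previous solution, cf. (35)-(36);
# a cold start du_t = 0 is used here only to keep the sketch self-contained.
du_t = np.zeros(m * nu)
dint_t = Mi @ du_t + ri
dst_t = [Ms[p] @ du_t + rs[p] for p in range(L)]
J_t = [J_num(p, du_t, dint_t, dst_t[p]) for p in range(L)]

cons = [d_int == Mi @ du + ri]                                   # stands in for (26) and (28)
cons += [d_st[p] == Ms[p] @ du + rs[p] for p in range(L)]        # stands in for (29)
cons += [J(p, du, d_int, d_st[p]) <= J_t[p] for p in range(L)]   # cost contracting (33)
cons += [d_int[nyi:] <= dint_t[nyi:]]                            # slack condition (34)
cons += [cp.abs(du) <= du_max]                                   # move-size bounds
T = np.kron(np.tril(np.ones((m, m))), np.eye(nu))                # cumulative-sum map du -> u
cons += [cp.abs(np.tile(u_prev, m) + T @ du) <= u_lim]           # input magnitude bounds

prob = cp.Problem(cp.Minimize(J(0, du, d_int, d_st[0])), cons)   # model 0 plays theta_N
prob.solve()
print(prob.status, np.round(du.value, 3))
```

In an actual application, the placeholder matrices would be recomputed at each sampling time from the measured realigned state and the model set, and du_t would be the shifted optimal sequence of (35) rather than zero.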

Remark 2. The constraint defined in (33) corresponds to the conventional cost contracting constraint first proposed in Badgwell [10] in the development of a robust MPC for the regulator problem and extended in Odloak [11] for the output tracking problem. In these works, it is shown that, for open-loop stable systems with multiplant uncertainty, the inclusion of constraint (33) forces the cost function of the true plant to be a Lyapunov function for the closed-loop system.

Remark 3. The constraint defined in (34) is intended to force the slack $\delta^i_{i,k}$ to converge to zero. It will be shown later that, if Problem 1 is feasible with this slack kept equal to zero, the control cost function of the true plant becomes a Lyapunov function for the uncertain system under the controller resulting from the solution of Problem 1. However, if there is uncertainty in the model related to the integrating part of the system, which means that there is uncertainty in matrices $A^i$ and/or $B^i$, then it is easy to show that it is not possible to find a solution to Problem 1 that satisfies constraint (28) for the different matrices $A^i$ and $B^i$ with $\delta^i_{i,k}(\theta_p)=0$, $p=1,\ldots,L$. This means that the approach followed here cannot be applied to systems with uncertainty in the gains of the integrating outputs. In the next theorems, we prove the robust stability of the controller obtained through the solution of Problem 1.

The proof of the stability of an MPC usually involves two ingredients: the recursive feasibility of the control problem and the existence of a Lyapunov function for the closed-loop system. The theorem below establishes the recursive feasibility of Problem 1, and the following two theorems show that the cost function can be interpreted as a Lyapunov function for the closed-loop system.

Theorem 4. If Problem 1 has a feasible solution at time step 𝑘, it will remain feasible at any subsequent time step 𝑘+𝑗.

Proof. Suppose that at time $k$ the optimal solution is represented as
$$\Delta u^*_k,\ \delta^{i*}_{i,k},\ \delta^{i*}_{\Delta,k},\ \delta^{s*}_{\Delta,k}(\theta_p),\quad p=1,\ldots,L. \qquad (37)$$
Then, it is easy to show that, at time step $k+1$, the following solution, inherited from the optimal solution at time $k$, will be feasible:
$$\Delta u_{k+1}=\big[\Delta u^*(k+1|k)^T\ \ \Delta u^*(k+2|k)^T\ \cdots\ \Delta u^*(k+m-1|k)^T\ \ 0\big]^T,$$
$$\delta^i_{i,k+1}=\tilde{\delta}^i_{i,k+1},\qquad \delta^i_{\Delta,k+1}=\tilde{\delta}^i_{\Delta,k+1},\qquad \delta^s_{\Delta,k+1}(\theta_p)=\tilde{\delta}^s_{\Delta,k+1}(\theta_p),\quad p=1,\ldots,L, \qquad (38)$$
where $\tilde{\delta}^i_{i,k+1}$, $\tilde{\delta}^i_{\Delta,k+1}$, $\tilde{\delta}^s_{\Delta,k+1}(\theta_p)$, $p=1,\ldots,L$, are computed as defined in the statement of Problem 1. Consequently, one can always find a feasible solution to Problem 1.
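
In an implementation, the candidate solution (38) is obtained simply by shifting the previous optimal move sequence by one sampling period and appending a zero move; a one-function NumPy sketch (with the stacked-vector layout assumed above) is:

```python
import numpy as np

def shifted_candidate(du_opt, nu):
    """Candidate input sequence of (38): drop du*(k|k) from the stacked optimal
    sequence [du*(k|k); ...; du*(k+m-1|k)] and append a zero move."""
    return np.concatenate([du_opt[nu:], np.zeros(nu)])
```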

Since Problem 1 is recursively feasible, the next theorem shows that the inclusion of constraint (34) in the control problem guarantees that, if the system is not disturbed, the slack $\delta^i_{i,k}$ related to the integrating outputs will converge to zero.

Theorem 5. If the system defined in (5)-(8) is stabilizable, the solution of Problem 1 at successive time steps drives the slacks $\delta^i_{i,k}$ related to the integrating outputs to zero.

Proof. Suppose that, at time step $k$, Problem 1 is solved and the solution defined in (37) is obtained. Then, since it is assumed that there is no uncertainty in the model related to the integrating outputs, it is easy to see that for the undisturbed system $\tilde{\delta}^i_{i,k+1}=\delta^{i*}_{i,k}$, and consequently, by constraint (34), $\delta^{i*}_{i,k+1}\le\delta^{i*}_{i,k}$. Then, if the system is stabilizable, $\delta^i_{i,k}$ will be driven to zero.

Theorem 6. Consider the system represented in (5), (6), (7), and (8). If the conditions defined in Theorems 4 and 5 are satisfied and the desired steady state is reachable, then the solution of Problem 1 at successive time steps will produce a sequence of control moves that drives the system outputs to the desired targets.

Proof. Suppose that the convergence of $\delta^i_{i,k}$ to zero has already been achieved and Problem 1 is solved at time step $k$. Let us designate the resulting optimal solution as
$$\Delta u^*_k,\ \delta^{i*}_{i,k}=0,\ \delta^{i*}_{\Delta,k},\ \delta^{s*}_{\Delta,k}(\theta_p),\quad p=1,\ldots,L. \qquad (39)$$
Let us now define the set of parameters corresponding to the true model as $\theta_T$. The resulting optimal control move $\Delta u^*(k|k)$ is implemented in the true system, and we move to time instant $k+1$, where Problem 1 is solved again. At this time step, as shown in Theorem 4, $\Delta\tilde{u}_{k+1}$, $\tilde{\delta}^i_{i,k+1}$, $\tilde{\delta}^i_{\Delta,k+1}$, $\tilde{\delta}^s_{\Delta,k+1}(\theta_p)$, $p=1,\ldots,L$, is a feasible solution to Problem 1. Also, since $\tilde{\delta}^i_{i,k+1}=\delta^{i*}_{i,k}=0$, it is easy to show that $\tilde{\delta}^i_{\Delta,k+1}=\delta^{i*}_{\Delta,k}$ and $\tilde{\delta}^s_{\Delta,k+1}(\theta_T)=\delta^{s*}_{\Delta,k}(\theta_T)$. Thus, it is easy to see that $\tilde{J}_{k+1}(\theta_T)\le J_k(\theta_T)$, and consequently, by constraint (33), $J_{k+1}(\theta_T)\le\tilde{J}_{k+1}(\theta_T)\le J_k(\theta_T)$. Now, since $J_k(\theta_T)$ is positive and bounded below by zero, it can be interpreted as a Lyapunov function for the closed-loop system with the proposed controller. This means that $y^i(k+j|k)-y^i_{sp}-\delta^i_{\Delta,k}$ and $y^s_{\theta_T}(k+j|k)-y^s_{sp}-\delta^s_{\Delta,k}(\theta_T)$ will converge to zero as $k\to\infty$. Following the same steps as in González et al. [12], it can be shown that, if the weights $S^i_\Delta$ and $S^s_\Delta$ are large enough in comparison with $R$ and the desired set point is reachable, the slacks $\delta^i_{\Delta,k}$ and $\delta^s_{\Delta,k}(\theta_T)$ will converge to zero and the outputs will converge to their set points. If the targets are not reachable, the distance to the targets will be minimized according to the norm $\big(\delta^i_{\Delta,k}\big)^T S^i_\Delta\,\delta^i_{\Delta,k}+\big(\delta^s_{\Delta,k}(\theta_T)\big)^T S^s_\Delta\,\delta^s_{\Delta,k}(\theta_T)$.

4. Simulation Example: The Distillation Column System

The proposed control strategy was tested with a small-dimension system from the process industry. The system is part of a distillation column in which isobutene is separated from n-butane in the alkylation unit of an oil refinery. This system was studied in Carrapiço et al. [13], who implemented an infinite horizon MPC to control the industrial distillation column. The controlled outputs are the level of liquid in the overhead drum ($y_1$, %) and the temperature at tray number 68 ($y_2$, °C). The manipulated variables are the steam flow rate to the reboiler ($u_1$, t/h), the distillate flow rate ($u_2$, m³/d), and the feed temperature deviation ($u_3$, °C). Figure 1 shows schematically the structure of the existing regulatory loops of the distillation column.

From practical tests, for a sampling period $T=1$ min, the following two models were obtained at different operating conditions:
$$G_1(z)=\begin{bmatrix}\dfrac{2.3\,z^{-4}}{1-z^{-1}} & \dfrac{0.7\times10^{-3}\,z^{-3}}{1-z^{-1}} & \dfrac{0.2\,z^{-5}}{1-z^{-1}}\\[3mm] \dfrac{0.4604\,z^{-8}}{1-0.902\,z^{-1}} & \dfrac{0.1915\times10^{-3}\,z^{-3}}{1-0.8632\,z^{-1}} & \dfrac{0.03304\,z^{-4}}{1-0.9174\,z^{-1}}\end{bmatrix}, \qquad (40)$$
$$G_2(z)=\begin{bmatrix}\dfrac{2.3\,z^{-4}}{1-z^{-1}} & \dfrac{0.7\times10^{-3}\,z^{-3}}{1-z^{-1}} & \dfrac{0.2\,z^{-5}}{1-z^{-1}}\\[3mm] \dfrac{0.1965\,z^{-4}}{1-0.9146\,z^{-1}} & \dfrac{0.3215\times10^{-3}\,z^{-2}}{1-0.8852\,z^{-1}} & \dfrac{0.03332\,z^{-6}}{1-0.9306\,z^{-1}}\end{bmatrix}. \qquad (41)$$
We observe that $y_1$ is integrating with respect to all the inputs, while $y_2$ is stable. Note that there is uncertainty of about 20% in the time constants and of about 50% in the process gains of the stable part of the model. Uncertainty in the dead times is also present in the model.
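
Each element of the second row of $G_1(z)$ in (40) is a first-order transfer function with a dead time, so the row can be converted to the common-denominator ARX form (1) for output $y_2$ by multiplying the three denominators. The NumPy sketch below performs this conversion; the variable names are ours, and only the coefficients of (40) are used.

```python
import numpy as np

# Second row of G1(z): (numerator gain, pole, dead time in samples) for each input.
elems = [(0.4604, 0.902, 8), (0.1915e-3, 0.8632, 3), (0.03304, 0.9174, 4)]

# Common denominator A(q^-1) = prod_j (1 - a_j q^-1), so that
# y2(k) + sum_i A_i y2(k-i) = sum_i B_i u(k-i)  as in (1).
A_poly = np.array([1.0])
for _, a, _ in elems:
    A_poly = np.polymul(A_poly, [1.0, -a])
A_coeffs = A_poly[1:]                              # A_1, A_2, A_3 (na = 3)

nb = max(e[2] for e in elems) + len(elems) - 1     # largest dead time plus numerator degree
B_coeffs = np.zeros((nb, len(elems)))              # row i-1 holds the 1 x 3 matrix B_i
for j, (b, a, d) in enumerate(elems):
    num = np.array([b])
    for jj, (_, a2, _) in enumerate(elems):
        if jj != j:                                # multiply by the other two denominators
            num = np.polymul(num, [1.0, -a2])
    B_coeffs[d - 1:d - 1 + len(num), j] = num      # shifted by the dead time d

print("A_i coefficients:", np.round(A_coeffs, 4))
print("nonzero B_i indices:", np.nonzero(B_coeffs.any(axis=1))[0] + 1)
```

The resulting coefficients could then be passed to a routine such as the realigned_incremental_model sketch of Section 2 to build the stable part (7) and (8) of the realigned model.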

In the set point tracking problem simulated here, the desired value of the liquid level in the overhead drum ($y_1$) was reduced by 1%, and the desired value of the temperature at stage #68 ($y_2$) was increased by 1°C. The tuning parameters of the controller used in the simulations are the following: $m=2$, $Q_i=1$, $Q_s=1$, $R=\mathrm{diag}([0.1\;\;0.1\;\;10])$, $S^i_\Delta=1\times10^{3}$, $S^s_\Delta=1\times10^{3}$, and $S^i_i=1\times10^{3}$. Relative to the values of the variables at the initial steady state, the input limits are $u_{\max}=[10\;\;400\;\;10]$, $u_{\min}=[-10\;\;-400\;\;-10]$, and $\Delta u_{\max}=[0.1\;\;50\;\;0.01]$. The nominal model is represented by model $G_1(z)$ defined in (40), and the true plant can be either $G_1(z)$ or $G_2(z)$ defined in (41). In this work, IHMPC designates the same controller as the one presented in Carrapiço et al. [13], which does not consider uncertainty in the process model. The first simulation considers the case in which the true plant has the same model as the nominal plant. Figures 2 and 3 show the system responses for the nominally stable IHMPC proposed in Carrapiço et al. [13] and for the robust controller resulting from the solution of Problem 1, designated here as IHRMPC. The two controllers have the same tuning parameters. One can see from Figures 2 and 3 that the nominal IHMPC and the robust IHRMPC perform similarly. This is quite surprising, since it is commonly accepted in the literature that a robust MPC should behave more conservatively than the controller based on the true model, as the robust controller also takes into account the output predictions of model $G_2(z)$, which is quite different from the true model of the plant. Such conservative behavior would produce a slower response, which is not observed in the simulation results obtained here. In the second simulation case, model $G_2(z)$ represents the true plant. The IHMPC is still based on model $G_1(z)$, while the proposed IHRMPC contains models $G_1(z)$ and $G_2(z)$. Figures 4 and 5 show the responses of the distillation system with each controller. We can see that the robust controller stabilizes the true plant and that its performance is acceptable. However, the IHMPC based only on the nominal model becomes unstable, which indicates that a controller based only on model $G_1(z)$ cannot be used to control the distillation column at the operating point where model $G_2(z)$ represents the true process.

5. Conclusion

In this paper, a new version of the robust infinite horizon MPC with output feedback was presented for systems with stable and integrating outputs. The adopted model formulation precludes the need for a state observer, and the computational burden of running the controller may thus be reduced. To accommodate uncertainty in the process model, the state space model was built in two separate parts: one part related to the integrating outputs and the other related to the stable outputs. With this approach, it was possible to include multiplant uncertainty in the model of the stable outputs. The resulting optimization problem is a convex nonlinear program that can be easily solved with the available NLP packages. A simulation example shows that the implementation of the developed approach in real industrial systems may be achievable, at least for systems of small to medium dimension.