Abstract
This work presents a new contribution to the design of a robust MPC with output feedback, input constraints, and an uncertain model. Multivariable predictive controllers have been used in industry to reduce the variability of the process output and to allow operation of the system near the constraints, where the optimum operating point is usually located. For this reason, new controllers have been developed with the objective of achieving better performance, a simpler control structure, and robustness with respect to model uncertainty. This work proposes a model predictive controller based on a nonminimal state space model in which the state is perfectly known. It is an infinite prediction horizon controller, and it is assumed that there is uncertainty in the stable part of the model, which may also include integrating modes, which are frequently present in process plants. The method is illustrated with a simulation example from the process industry using linear models based on a real process.
1. Introduction
Model predictive control has achieved remarkable popularity in the process industry, with thousands of practical applications [1]. One of the reasons for this industrial acceptance is the ability of MPC to incorporate constraints into the control problem. However, one additional desirable characteristic, still not provided by commercial MPC packages, is closed-loop stability in the presence of model uncertainty, which is usually related to the nonlinearity of the real system. When model uncertainty is considered in the synthesis of the model predictive controller, the majority of existing robust algorithms demand a computational effort that is prohibitive for practical implementation [2]. From the application viewpoint, we could not find in the control literature a satisfactory solution to the robust MPC problem with output feedback and input constraints. Rodrigues and Odloak [3] presented a formulation of the robust unconstrained MPC with output feedback in which stability is achieved through the explicit inclusion of a Lyapunov inequality in the control optimization problem. Later, Rodrigues and Odloak [4] extended the method to allow the switching of active input constraints during transient conditions. Perez [5] extended the non-minimal state space model (realigned model) proposed in Maciejowski [6] to the incremental form, and this model was used in the controller proposed in Rodrigues and Odloak [4] to produce a robust controller with reduced computational effort, since the use of a state observer is avoided: the state of the realigned model is directly read from the plant. In this approach, the MPC solves the optimization problem that produces the control law in two stages. The first stage is solved offline and computes a family of linear controllers in which Lyapunov inequalities are added to the control problem so as to ensure state contraction (stability).
These linear controllers match all possible configurations of saturation of the manipulated variables for a given set of variables that need to be controlled. The second stage is solved online and computes a convex combination of these linear controllers through an optimization problem that includes all the input bounds. These combinations span all possible control configurations in terms of controlled outputs and manipulated inputs. For each control configuration, the linear controller is robust to all process models defining the multiplant uncertainty. The main disadvantage of this approach is that the number of possible control configurations, corresponding to the possible combinations of saturated inputs, may become very high in systems of large dimension. In that case, the offline step of the control problem may be computationally very expensive, or there may not be a feasible solution. The realigned model was applied in González et al. [7] to the development of an infinite horizon MPC that is nominally stable for open-loop stable and integrating systems. The same non-minimal model was applied in González and Odloak [8] to the development of a robust MPC for uncertain stable systems.
The main objective of this work is to extend the controller proposed in González and Odloak [8] to systems with stable and integrating outputs. The method proposed here is based on the non-minimal state space model formulation presented in Maciejowski [6], and the control problem considers that there may be uncertainty in the stable part of the model, while the model of the integrating part is perfectly known. In Section 2, the non-minimal (realigned) model in the incremental form is developed for systems with stable and integrating outputs. The realigned model is split into two parts, one related to the stable outputs and the other related to the integrating outputs. Model uncertainty affects only the stable outputs and is characterized by a family of possible linear time-invariant models. It is assumed that the true model of the system is unknown, but at any time instant the system can be represented by one of the models of this family. In practice, the models of the integrating outputs are less affected by uncertainty, and here this uncertainty is neglected. In Section 3, the optimization problem that produces the robust controller is presented. Since the inputs affect both the stable and the integrating outputs, the cost function of the robust MPC proposed here includes a combination of the weighted predicted errors in all the outputs. Several results concerning the robustness of the convergence and stability of the resulting closed-loop system are provided. Next, in Section 4, a simulation example, based on linear models of a process system from the oil refining industry, is used to illustrate the application of the new method, and finally, in Section 5, the paper is concluded.
2. The Nonminimal State Space Model with Measured State
Consider the following discrete time-invariant model: where and are the orders of the discrete model, and are the coefficients that correspond to the parameters of the model, is the system output, and is the system input. Here, it is assumed that in the model represented above there are no zero-pole cancellations. In Maciejowski [6], it is shown that this model can be written in a state space form in which the state is built from the measurements of the plant inputs and outputs at different time instants. In González et al. [7], the same model is adopted, written in the input incremental form: where is the present sampling instant, is the input increment, and is the output. The matrices involved in this model are (see more details in Maciejowski [6]) In the system defined by (2), the input and output values read from the plant are used to realign the model, which gives the controller a better disturbance rejection capability. In González and Odloak [8], it is shown that the model presented above is detectable and stabilizable. More detailed discussions of minimal state space representations and the resulting observability and controllability of time-delayed systems can be found in De La Sen [9].
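Because the realigned state is assembled directly from measured data, no observer is needed. A minimal sketch of this idea, with hypothetical function and variable names and an assumed state layout (the exact ordering and dimensions follow the matrices in Maciejowski [6], which are not reproduced here):

```python
import numpy as np

def realigned_state(y_hist, du_hist, ny, nu_order):
    """Build a non-minimal (realigned) state from measured past
    outputs and past input increments.  The layout below is an
    illustrative assumption: most recent ny outputs and the last
    (nu_order - 1) input increments, newest first.
    """
    return np.concatenate([y_hist[-ny:][::-1],
                           du_hist[-(nu_order - 1):][::-1]])

# Example: second-order SISO model (ny = 2, nu_order = 2)
y_hist = np.array([0.0, 0.1, 0.3])   # measured outputs up to time k
du_hist = np.array([0.05, 0.02])     # applied input increments
x = realigned_state(y_hist, du_hist, ny=2, nu_order=2)
```

Since every entry of the state is a plant measurement, the state is known exactly at each sampling instant, which is the property the controller below exploits.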
For the control implementation, in order to better locate the uncertainties along the process model, it is convenient to divide the model into two separate parts. The first part of the model is related to the pure integrating outputs, and the second part is related to the stable outputs. Here, it is assumed that the process has no outputs related simultaneously to stable and integrating modes. Then, suppose that the model defined by (2) is written for the integrating outputs: Analogously, a similar model can be written for the stable outputs: Observe that some of the components of the model defined in (7) and (8) depend on , that is, the vector of parameters that defines the model (). These parameters are unknown, which characterizes the uncertainty in the model related to the stable outputs. The simplest way to consider uncertainty is through multimodel uncertainty, where belongs to a finite set . This means that the true model of the process is unknown, but it is one of the models of a finite set of known models.
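The multimodel description above amounts to a finite list of candidate linear models. A minimal sketch of how such a set might be represented and queried (the class, matrices, and numbers are illustrative assumptions, not the models of Section 4):

```python
import numpy as np

class LinearModel:
    """One candidate model theta in the finite uncertainty set."""
    def __init__(self, A, B, C):
        self.A, self.B, self.C = map(np.asarray, (A, B, C))

# Omega = {theta_1, ..., theta_L}: each entry is one possible model
# for the stable part of the plant (values here are made up).
Omega = [
    LinearModel(A=[[0.90]], B=[[1.0]], C=[[1.0]]),  # e.g. nominal model
    LinearModel(A=[[0.80]], B=[[1.4]], C=[[1.0]]),  # e.g. perturbed model
]

def predict_all(Omega, x0, du):
    """One-step output prediction under every model in the set; a
    robust MPC penalizes the predicted error of each of them."""
    return [(m.C @ (m.A @ np.atleast_1d(x0) + m.B @ np.atleast_1d(du))).item()
            for m in Omega]
```

The true plant is assumed to coincide with one (unknown) member of `Omega`, so constraints and cost terms are written per model.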
Within the conventional MPC formulation, the control horizon is such that , , then, with the model representation defined in (2) and (3), it is easy to show that , where and .
Then, after time step , it is possible to apply the Jordan decomposition of as follows: where is a block diagonal matrix with the eigenvalues of on its main diagonal. If is a full-rank square matrix, then is invertible, and it is possible to make a change of variables such that the integrating states related to the incremental form of the model are separated from the remaining states of the system. For instance, for the integrating system defined in (5) and (6), one can define where corresponds to the integrating states related to the incremental form of the model and is the state associated with the real integrating states of the system. is obtained from the Jordan decomposition of matrix as follows: It can be shown that where is the number of integrating outputs.
It should be noted that, after time step , the transformed state of the integrating system will evolve according to the equation Analogously, for the system represented through (7) and (8), the following state transformation can be defined: where corresponds to the integrating states related to the incremental form of the model and is the state associated with the stable states of the system corresponding to the parameters represented as . is obtained from the Jordan decomposition of matrix as follows: It can be shown that where is the number of stable outputs of the system. If the stable poles of the system are nonrepeated, is a diagonal matrix containing the eigenvalues of .
After time step , the transformed state corresponding to the stable outputs will evolve according to the equation
3. The Robust MPC with Output Feedback
The system outputs need to be controlled through the manipulation of the nu inputs. The robust MPC proposed here is then based on the following objective function: where is the prediction of the integrating output at time computed at time and is the prediction of the stable output corresponding to model ; , , and are slack variables that are included in the control problem to guarantee that is bounded. , , , , and are positive weighting matrices. In the second term on the right-hand side of (17), the prediction of the integrating output can be written as follows: But, since , (18) becomes Then, using (10) and (12), (19) can be written as follows: If (20) is substituted into the infinite sum represented by the second term on the right-hand side of (17), it is easy to see that the objective function defined in (17) will be bounded only if the following constraints are satisfied: Analogously, in the infinite sum represented by the fourth term on the right-hand side of (17), the prediction of the stable output can be written as follows: Now, using (13) and (16), (23) can be written as follows: If the expression for the output prediction obtained in (24) is substituted into the infinite sum represented by the fourth term on the right-hand side of (17), it is easy to see that the cost function will be bounded only if the following constraint is satisfied: Consequently, the optimization problem that produces the control law proposed here has to include the constraints defined in (21), (22), and (25).
Equation (21) can be developed in terms of the available states at time and the vector of the future control moves as follows: where is a matrix of ones and zeros that collects component from and matrix collects component from . The model defined in (5) can be used to represent (26) in terms of the integrating state that at time , is measured as follows: where and .
The constraint (22) can also be represented in terms of the present state and the vector of future control moves: Analogously, (25) can be written as follows: where captures the state component of state and captures the component of state . With the constraints discussed above, the control cost function defined in (17) can be written as follows: where is obtained from the solution to the following equation: Observe that (30) is a finite horizon cost function with a terminal weight computed through (31).
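Terminal weights of this kind are typically the solution of a discrete Lyapunov equation of the form S = FᵀSF + Q, which sums the infinite tail of the cost of the stable modes in closed form. A sketch under that assumption (the matrices of (31) are not reproduced here; F and Q below are illustrative):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative stable dynamics (all eigenvalues inside the unit circle)
# and output-error weight.
F = np.array([[0.8, 0.1],
              [0.0, 0.5]])
Q = np.eye(2)

# scipy solves  A X A^H - X + Q = 0; passing F transposed yields
# the terminal weight satisfying  S = F^T S F + Q.
S = solve_discrete_lyapunov(F.T, Q)

# Sanity check: S satisfies the equation and is positive definite,
# so x^T S x equals the infinite sum of the tail cost.
residual = np.linalg.norm(F.T @ S @ F - S + Q)
```

With such a terminal weight, the infinite-horizon objective reduces to a finite-horizon quadratic program, which is what makes the controller implementable.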
One can now define the robust MPC with output feedback for systems with integrating and stable outputs. The controller is robust in the sense that it maintains stability even in the presence of uncertainty in the model related to the stable outputs. Assuming that corresponds to the nominal or most probable model, the proposed controller is based on the solution to the following problem.
Problem 1. Consider the following objective function: subject to (26), (28) and where is computed with where corresponds to the optimal solution to Problem 1 at time step and , which are obtained from the solution to the following equations:
Remark 2. The constraint defined in (33) corresponds to the conventional cost contracting constraint first proposed in Badgwell [10] in the development of a robust MPC for the regulator problem and extended in Odloak [11] for the output tracking problem. In these works, it is shown that, for open-loop stable systems with multiplant uncertainty, the inclusion of constraint (33) forces the cost function of the true plant to be a Lyapunov function for the closed-loop system.
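The cost-contracting constraint of Remark 2 can be sketched as a simple per-model check: the cost of each model in the uncertainty set, evaluated at the new solution, must not exceed its cost at the shifted solution inherited from the previous step. The function and inputs below are a hypothetical illustration of that check, not the constraint as embedded in the optimizer:

```python
import numpy as np

def contracting_ok(costs_now, costs_prev, tol=1e-9):
    """Badgwell-type contracting check: for every model theta in the
    uncertainty set, the current cost must be no larger than the cost
    of the previous (feasible, shifted) solution for that same model.
    """
    costs_now = np.asarray(costs_now, dtype=float)
    costs_prev = np.asarray(costs_prev, dtype=float)
    return bool(np.all(costs_now <= costs_prev + tol))

# Example: two candidate models, both costs non-increasing.
ok = contracting_ok([1.0, 2.0], [1.5, 2.0])
```

Since the true plant is one member of the set, its cost is forced to be non-increasing, which is what makes it a Lyapunov function candidate.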
Remark 3. The constraint defined in (34) is intended to force the slack to converge to zero. It will be shown later that, if Problem 1 is feasible with this slack kept equal to zero, the control cost function of the true plant becomes a Lyapunov function for the uncertain system under the controller resulting from the solution to Problem 1. However, if there is uncertainty in the model related to the integrating part of the system, which means that there is uncertainty in matrices and/or , then it is easy to show that it is not possible to find a solution to Problem 1 that satisfies constraint (28) for the different matrices and with . This means that the approach followed here cannot be applied to systems with uncertainty in the gains of the integrating outputs. In the next theorems, we prove the robust stability of the controller obtained through the solution to Problem 1.
The proof of the stability of an MPC usually involves two ingredients: the recursive feasibility of the control problem and the existence of a Lyapunov function for the closed-loop system. The theorem below shows the recursive feasibility of Problem 1, and the other two theorems show that the cost function can be interpreted as a Lyapunov function for the system.
Theorem 4. If Problem 1 has a feasible solution at time step , it will remain feasible at any subsequent time step .
Proof. Suppose that at time the optimum solution is represented as follows: Then, it is easy to show that, at time step , the following solution, inherited from the optimum solution at time , will be feasible: where are computed as defined in the statement of the theorem, and consequently, one can always find a solution to Problem 1.
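The inherited solution used in this proof is the standard shifted candidate: discard the first (already applied) control move of the optimal sequence and append a zero move at the end of the horizon. A sketch of that construction, with hypothetical names:

```python
import numpy as np

def shifted_candidate(du_opt, m):
    """Recursive-feasibility candidate for time k+1: drop the first
    (already applied) move of the optimal sequence from time k and
    append a zero move.  du_opt stacks the m optimal moves
    [du(k), ..., du(k+m-1)]; the per-step input dimension is inferred.
    """
    du_opt = np.asarray(du_opt, dtype=float)
    nu = du_opt.size // m                        # inputs per step
    tail = du_opt[nu:]                           # discard applied move
    return np.concatenate([tail, np.zeros(nu)])  # append a zero move
```

Because the appended move is zero, the candidate satisfies the same input bounds as the previous solution, which is what guarantees feasibility at every subsequent step.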
Since Problem 1 has recursive feasibility, the next theorem shows that the inclusion of constraint (34) in the control problem guarantees that, if the system is not disturbed, the slack related to the integrating outputs will converge to zero.
Theorem 5. If the system defined in (5), (7), (6), and (8) is stabilizable, the solution to Problem 1 at successive time steps drives the slacks related to the integrating outputs to zero.
Proof. Suppose that, at time step , Problem 1 is solved and the solution defined in (37) is obtained. Then, since it is assumed that there is no uncertainty in the model related to the integrating outputs, it is easy to see that for the undisturbed system , and consequently, . Then, if the system is stabilizable, will be driven to zero.
Theorem 6. Consider the system represented in (5), (6), (7), and (8). If the conditions defined in Theorems 4 and 5 are satisfied and the desired steady state is reachable, then, the solution of Problem 1 at successive time steps will produce a sequence of control moves that will drive the system outputs to the desired targets.
Proof. Suppose that the convergence of to zero has already been achieved and Problem 1 is solved at time step . Let us designate the resulting optimum solution as Let us now define the set of parameters corresponding to the true model as . The resulting optimum control move is implemented in the true system and we move to time instant where Problem 1 is solved again. At this time step, as shown in Theorem 4, is a feasible solution to Problem 1. Also, since , it is easy to show that . Thus, it is easy to see that , and consequently, . Now, since is positive and bounded below by zero, it can be interpreted as a Lyapunov function for the closed-loop system with the proposed controller. This means that and will converge to zero as . Following the same steps as in González et al. [12], it can be shown that if weights and are large enough in comparison to and the desired set point is reachable, slacks and will converge to zero and the outputs will converge to the set points. If the targets are not reachable, the distance to the targets will be minimized according to the norm .
4. Simulation Example: The Distillation Column System
The proposed control strategy was tested on a small-dimension system from the process industry. The system is part of a distillation column in which isobutane is separated from n-butane in the alkylation unit of an oil refinery. This system was studied in Carrapiço et al. [13], who implemented an infinite horizon MPC to control the industrial distillation column. The controlled outputs are the level of liquid in the overhead drum ( %) and the temperature at tray number 68 (, °C). The manipulated variables are the steam flow rate to the reboiler (, t/h), the distillate flow rate (, m3/d), and the feed temperature deviation (, °C). Figure 1 shows schematically the structure of the existing regulatory loops of the distillation column.
From practical tests, for a sampling period of min, the following two models were obtained at different operating conditions: We observe that is integrating with respect to all the inputs, while is stable. Notice that there is uncertainty of about 20% in the time constants and of about 50% in the process gains of the stable part of the model. Uncertainty in the dead times is also present in the model.
In the set point tracking problem simulated here, the desired value of the liquid level in the overhead drum () was reduced by 1%, and the desired value of the temperature at stage #68 () was increased by 1°C. The tuning parameters of the controller used in the simulations are the following: , , , , , , and . Relative to the values of the variables at the initial steady state, the input limits are , , and . The nominal model is model , defined in (40), and the true plant can be either or , defined in (41). In this work, IHMPC designates the controller presented in Carrapiço et al. [13], which does not consider the uncertainty in the process model. In the first simulation, the case in which the true plant has the same model as the nominal plant is considered. Figures 2 and 3 show the system responses for the nominally stable IHMPC proposed in Carrapiço et al. [13] and for the robust controller resulting from the solution to Problem 1, designated here as IHRMPC. The two controllers have the same tuning parameters. One can see from Figures 2 and 3 that the nominal IHMPC and the robust IHRMPC perform similarly. This is somewhat surprising, because it is widely assumed in the literature that a robust MPC should behave more conservatively than the controller based on the true model, as the robust controller also takes into account the output predictions of a model that is quite different from the true model of the plant. Such conservative behavior would produce a slower response, which is not observed in the simulation results obtained here. In the second simulation case, model represents the true plant. The IHMPC is still based on model , while the proposed IHRMPC contains models and . Figures 4 and 5 show the responses of the distillation system with each controller. We can see that the robust controller stabilizes the true model and that its performance is acceptable.
However, the IHMPC based only on the nominal model becomes unstable, which indicates that the controller based only on model cannot be used to control the distillation column at the operating point where model represents the true process.
5. Conclusion
In this paper, a new version of the robust infinite horizon MPC with output feedback for systems with stable and integrating outputs was presented. The adopted model formulation precludes the need for a state observer, and the computational burden of running the controller may be reduced. To accommodate uncertainty in the process model, the state space model was built in two separate parts: one related to the integrating outputs and the other related to the stable outputs. With this approach, it was possible to include multiplant uncertainty in the model of the stable outputs. The resulting optimization problem is a convex nonlinear program that can be easily solved with available NLP packages. A simulation example shows that the implementation of the developed approach in real industrial systems is achievable, at least for systems of small to medium dimension.