Research Article  Open Access
Efficient Model Predictive Algorithms for Tracking of Periodic Signals
Abstract
This paper studies the design of efficient model predictive controllers for fast-sampling linear time-invariant systems subject to input constraints to track a set of periodic references. The problem is decomposed into a steady-state subproblem that determines the optimal asymptotic operating point and a transient subproblem that drives the given plant to this operating point. While the transient subproblem is a small-sized quadratic program, the steady-state subproblem can easily involve hundreds of variables and constraints. The decomposition allows these two subproblems of very different computational complexities to be solved in parallel at different sampling rates. Moreover, a receding horizon approach is adopted for the steady-state subproblem to spread the optimization over time in an efficient manner, making its solution possible for fast-sampling systems. Besides the conventional formulation based on the control inputs as variables, a parameterization using a dynamic policy on the inputs is introduced, which further reduces the online computational requirements. Both proposed algorithms possess desirable convergence properties, which are also verified with computer simulations.
1. Introduction
One of the most attractive features of model predictive control (MPC) is its ability to handle constraints [1]. Many other control techniques are conservative in handling constraints, or even try to avoid activating them, thus sacrificing the best achievable performance. MPC, on the contrary, tends to make the closed-loop system operate near its limits and hence produces far better performance. This property gives MPC its practical strength and has led to wide acceptance in industry.
A very good example of a system operating near its limits is a plant driven by periodic signals to track periodic references. In this situation, some of the system constraints are activated repeatedly, and the optimal operating control signal is far from trivial. Simply clipping the control signal to fit the system constraints typically produces inferior performance. Moreover, the loss being considered here is not just a transient loss due to sudden disturbances but a steady-state loss due to a suboptimal operating point; it is therefore long-term and severe.
On the other hand, the successful real-life applications of MPC are mostly on systems with slower dynamics such as industrial and chemical processes [2]. The reason is simply that MPC requires a constrained optimization to be carried out online in a receding horizon fashion [3, 4]; to apply MPC to fast-sampling systems, the computational power needed is therefore substantial. Nevertheless, because of its great success in slow-sampling systems, the trend to extend MPC to fast-sampling systems is inevitable, and much recent research has been devoted to efficient methods for implementing MPC in such cases. While some of these works focus on unloading the computational burden [5–9], others emphasize code optimization [10–12] and new algorithmic paradigms [13–17].
If MPC is applied to the tracking of periodic signals in a receding horizon fashion, the horizon length will be related to the period length, and a long period implies an online optimization problem with many variables and constraints. For a fast-sampling system, this is essentially an attempt to solve a very large computational problem within a very small time frame. In this paper, we shall analyze the structure of this problem and then propose two efficient algorithms for the task. They aim to make the application of MPC to a fast-sampling system possible at the cost of a slight sacrifice in transient performance, while the optimal or near-optimal steady-state performance of periodic tracking is maintained.
In Section 2, the mathematical formulation of the problem will be presented. The two algorithms, one based on the concept of receding horizon quadratic programming and the other based on the idea of a dynamic MPC policy, will be presented in Sections 3 and 4, respectively. A comment on the actual implementation will be given in Section 5, followed by some computer simulations in Section 6 to illustrate several aspects of the proposed algorithms. Finally, Section 7 concludes the paper.
To avoid cumbersome time-indexed notation, the MPC algorithms in this paper will be presented as if the current time is zero, with the time argument written accordingly. The reader is asked to bear in mind that the algorithms are actually to be implemented in a receding horizon fashion.
2. Problem Formulation
Consider a linear time-invariant plant subject to a periodic disturbance, where the superscript + denotes the time-shift operator and the disturbance is measurable and periodic. The control objective is to construct a control signal such that the plant output will track a specified periodic reference of the same period asymptotically with satisfactory transient performance. The control input is also required to satisfy some linear inequality constraints (e.g., to lie within certain bounds). The reference is not necessarily fixed but may differ for different disturbances. (For that reason, it may be more appropriate to speak of an exogenous signal rather than a disturbance.)
The algorithms developed in this paper are motivated by the following situations: (1) the period is very long compared with the typical transient behaviour of the closed-loop system; (2) the linear inequality constraints on the input are persistently active, that is, within any period there is some instant at which the input meets at least one of the associated constraints with equality; (3) there is insufficient computational power to solve the associated quadratic program completely within one sampling interval unless both the control horizon and the prediction horizon are much shorter than the period.
As a matter of fact, without the above considerations and restrictions, the problem is not very challenging and can be tackled with standard MPC approaches for linear systems.
The underlying idea of the approach proposed in this paper is that, since the transient behaviour of the closed-loop system is expected to be much shorter than the period, we should decompose the original control problem into two: one we call the steady-state subproblem and the other the transient subproblem. The transient subproblem can then be solved with a control horizon and a prediction horizon much shorter than the period. While the steady-state subproblem is still very computationally intensive and cannot be solved within one sampling interval, it is not as urgent as the transient subproblem, and its computation can be spread over several sampling intervals. Indeed, the two subproblems need not be synchronized even though the transient subproblem depends on the solution of the steady-state subproblem through the coupled input constraints. The former utilizes the latter whenever an update is made available; if the optimal steady-state control is not available in time, the transient control simply makes the plant output track a suboptimal reference.
Now let us present the detailed mathematical formulation of our proposed method. Naturally, since both the disturbance and the reference are periodic with the same period, the solution should also be asymptotically periodic with that period; that is, there should exist a periodic control signal to which the input converges. Let the corresponding asymptotic state and output solutions be defined accordingly; they obviously satisfy the dynamics inherited from (1) and (2). Ideally, we want perfect asymptotic tracking, but this might not be achievable when the input is required to satisfy the specified linear inequality constraints. Therefore, following the spirit of MPC, we shall find the periodic input such that the quadratic cost (7) is minimized for some positive definite matrix, where the asymptotic tracking error is defined accordingly and the summation in (7) is over one period of the signals. This is what we call the steady-state subproblem. In Sections 3 and 4 below, we shall present two different approaches to this steady-state subproblem, with an emphasis on their computational efficiency.
Once the steady-state signals are known, the transient signals, defined as the deviations from them, satisfy the dynamics derived from (1)–(3), subject to the original linear inequality constraints being applied to the total input. Since the control horizon and the prediction horizon for this transient subproblem are allowed to be much shorter than the period, it can be tackled with existing MPC algorithms.
It is important to note that in this steady-state/transient decomposition, the steady-state control is actually a feedforward control signal determined from the disturbance and the reference, whereas the transient control is a feedback control signal depending on the plant state. Since an unstable plant can only be stabilized by feedback, while the main interest of the current paper is the computational complexity of the steady-state subproblem, we shall not discuss in depth the stabilization issue, which has been studied quite extensively in the MPC literature. Typically, stabilizability of a constrained system using MPC is cast as the feasibility of an associated optimization problem. For the numerical example in Section 6, we shall conveniently pick a plant that is already stable, and hence the following quadratic cost may be adopted for the transient subproblem: where the control horizon and the weighting matrices are chosen positive definite, and the terminal weight is the (weighted) observability Gramian obtained from the Lyapunov equation. The minimization of this cost is simply a standard quadratic program over the transient inputs for a given initial state. The situation is more complicated when the plant is not stable, but one well-known approach is to force the unstable modes to zero at the end of the finite horizon [18].
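As a concrete illustration of the terminal weight, the discrete Lyapunov equation of the form in (12) can be solved numerically. The sketch below uses SciPy with hypothetical plant matrices (A, C) and an output weight Q; these matrices are illustrative assumptions, not the UPQC model of Section 6.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical stable plant matrices and output weight (illustrative only).
A = np.array([[0.9, 0.2], [0.0, 0.7]])
C = np.array([[1.0, 0.0]])
Q = np.eye(1)

# Weighted observability Gramian: P satisfies P = A'PA + C'QC.
# solve_discrete_lyapunov(a, q) returns X with X = a X a' + q,
# so we pass a = A' and q = C'QC.
P = solve_discrete_lyapunov(A.T, C.T @ Q @ C)

# Sanity check: P satisfies the discrete Lyapunov equation.
assert np.allclose(A.T @ P @ A + C.T @ Q @ C, P)
```

With this P as the terminal weight, the finite-horizon quadratic cost exactly accounts for the infinite tail under zero future input, as discussed in Remark 1.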
Remark 1. Essentially, the choice of the cost function (11) with the terminal weight from (12) for a stable plant means that the projected control action after the finite horizon is set to zero, since the "tail" of the quadratic cost is then captured exactly by the terminal term. This terminal policy is valid because the steady-state subproblem has already required the steady-state input to satisfy the linear inequality constraints imposed on the total input. Hence, the cost is obviously a Lyapunov function, which will be further decreased by the receding horizon implementation when the future control turns from zero into an optimization variable.
Remark 2. We have deliberately omitted an input weighting matrix in the steady-state cost in (7). The reason is simply that we want to recover the standard linear solution (for perfect asymptotic tracking) as long as the steady-state input does not hit any constraint.
3. Steady-State Subproblem: A Receding Horizon Quadratic Programming Approach
When the periodic disturbance is perfectly known, the steady-state subproblem is a conceptually simple (but computationally high-dimensional) quadratic program. One way to know the disturbance is simply to monitor and record it over one full period. This, however, does not work well if the disturbance is subject to sudden changes. For example, the plant to be considered in our simulations in Section 6 is a power quality device called the Unified Power Quality Conditioner [19], where the exogenous signal consists of the supply voltage and the load current of the power system, and both may change abruptly if there are supply voltage sags/swells and variations in the load demands. Indeed, the main motivation of the receding horizon approach in MPC is that things never turn out as expected and the control signal should adapt in an autonomous manner. If a suddenly changed disturbance can be known precisely only after one full period of observation, the transient response of the steady-state subproblem (not to be confused with the transient subproblem described in Section 2) will be unsatisfactory.
One way to overcome this is to introduce an exogenous model for the disturbance and reference signals, as adopted in [19]. Specifically, we construct a state-space model and assume that both signals are generated from this model. Since the signals are periodic, the model dynamics must reproduce that period. One simple (but not the only) way to construct the dynamics matrix, as demonstrated similarly in [19] in the continuous-time case, is to make it block-diagonal with each block taking the given harmonic form, parameterized by an integer harmonic index and the sampling time. The output matrices then simply sum up the respective odd components of the exogenous state. This essentially performs a Fourier decomposition of the signals, and hence their approximations will be arbitrarily good as more and more harmonics are included in the model.
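A minimal sketch of one way to build such a model, assuming each diagonal block is the 2×2 rotation matrix for one harmonic (the exact block form in (17) may differ) and the output sums the first component of each block:

```python
import numpy as np

def rotation_block(theta):
    """2x2 rotation: a discrete-time harmonic oscillator at phase step theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

# Assumed parameters: 100 samples per fundamental period, odd harmonics 1, 3, 5.
N = 100                       # samples per period
harmonics = [1, 3, 5]         # harmonics included in the model
theta = 2 * np.pi / N         # fundamental phase advance per sample

# Block-diagonal exogenous dynamics: one rotation block per harmonic.
n_blocks = len(harmonics)
Aw = np.zeros((2 * n_blocks, 2 * n_blocks))
for i, n in enumerate(harmonics):
    Aw[2*i:2*i+2, 2*i:2*i+2] = rotation_block(n * theta)

# Output map sums the first component of each block (Fourier synthesis).
Cw = np.zeros((1, Aw.shape[0]))
Cw[0, ::2] = 1.0

# The generated signal is periodic: advancing one full period is the identity.
assert np.allclose(np.linalg.matrix_power(Aw, N), np.eye(Aw.shape[0]))
```

Driving this autonomous model from any initial state produces a sum of the chosen harmonics, so an observer estimating the state effectively identifies the Fourier coefficients of the disturbance and reference.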
Based on the exogenous model (14)–(17), an observer can be easily constructed to generate (an estimate of) from the measurements of and . From , the model (14)–(17) can then generate predictions of and , and these can be used to find by the quadratic program. The use of the exogenous model (14)–(17) typically allows the changed and to be identified much sooner than the end of one full period.
The quadratic program for the steadystate subproblem can be written as follows: where is the total number of linear inequality constraints. Note that since we assume only input but not state constraints for our original problem, (20) does not depend on and, hence, the feasibility of any remains the same even if there is an abrupt change in (i.e., if is different from the predicted value from (14) and ). Furthermore, the active set of constraints remains the same.
Definition 3. The active set of any feasible satisfying (20) is the subset of such that if and only if .
Next, we present a one-step active set algorithm to solve the quadratic program (19)–(20) partially.
Algorithm 4. Given an initial feasible point and a working set, let the set of working constraints be represented by (21), where the inequality sign applies componentwise, that is, each row of (21) represents a working constraint. (1) Compute the gradient of the cost and the null space of the working-constraint matrix. If the null space is empty or the projected gradient vanishes, go to step (5). (2) Compute a search direction within this null space. (3) Compute the maximum feasible step length along this direction. (4) If a full step is feasible, go to step (5). Otherwise, take the maximum feasible step and add a (blocking) constraint to the working set according to the method described in Remark 7 below. Quit. (5) Take the full step to the stationary point on the working set. Compute the Lagrange multiplier to see whether any of its components are negative. If yes, remove one of the constraints corresponding to a negative component from the working set according to the method described in Remark 7 below. Quit.
Algorithm 4 can be interpreted as follows. It solves the equality-constrained quadratic program obtained by treating the working constraints as equalities and then searches along the resulting direction until it is either blocked by a constraint outside the working set (step (4)) or the stationary point is reached (step (5)). This is indeed a single step of the standard active set method (the null-space approach) for quadratic programming [20, 21], except for the modifications that will be detailed in Remark 7 below. In other words, if we apply Algorithm 4 repeatedly to the successive iterates, it will converge to the solution of the original inequality-constrained quadratic program (19)–(20) within a finite number of steps, and the cost function is strictly decreasing. However, here we only apply a single step of the active set method due to the limited computational power available within one sampling interval. Furthermore, we do not even assume that this single step of computation can be completed within one sampling interval; let the time required or allowed to carry out Algorithm 4 be given. To complete the original quadratic program (19)–(20) in a receding horizon fashion, we need to advance the iterate in time by rotating its components by the corresponding number of samples, since it is supposed to be periodic, as in (32). Obviously, Algorithm 4 will then continue to solve an equivalent quadratic program as long as the exogenous state strictly follows the dynamics (14). Hence we have the following convergence result.
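For concreteness, a single null-space active-set iteration of the kind used in Algorithm 4 can be sketched as follows, for a generic problem min 0.5 w'Hw + f'w subject to Aw ≤ b. This is an illustrative sketch only: it omits the time-priority tie-breaking of Remark 7 and the periodic rotation (32), and all names are assumptions.

```python
import numpy as np

def active_set_step(H, f, A, b, w, W):
    """One iteration of a null-space active-set method for
    min 0.5*w'Hw + f'w  s.t.  A w <= b, starting from a feasible w
    and working set W (list of constraint row indices)."""
    g = H @ w + f
    if W:
        _, s, Vt = np.linalg.svd(A[W])
        Z = Vt[int(np.sum(s > 1e-10)):].T   # null space of working rows
    else:
        Z = np.eye(len(w))
    if Z.shape[1] > 0 and np.linalg.norm(Z.T @ g) > 1e-10:
        # Newton step restricted to the null space of the working set.
        p = -Z @ np.linalg.solve(Z.T @ H @ Z, Z.T @ g)
        alpha, block = 1.0, None
        for i in set(range(A.shape[0])) - set(W):
            ap = A[i] @ p
            if ap > 1e-12 and (b[i] - A[i] @ w) / ap < alpha:
                alpha, block = (b[i] - A[i] @ w) / ap, i
        w = w + alpha * p
        if block is not None:
            W = W + [block]                  # add the blocking constraint
    elif W:
        # Stationary on the working set: inspect Lagrange multipliers.
        lam, *_ = np.linalg.lstsq(A[W].T, -g, rcond=None)
        if lam.min() < -1e-10:
            W = [j for k, j in enumerate(W) if k != int(np.argmin(lam))]
    return w, W

# Tiny example: min 0.5*||w - (2, 0.5)||^2 subject to w <= (1, 1).
H, f = np.eye(2), np.array([-2.0, -0.5])
A, b = np.eye(2), np.array([1.0, 1.0])
w, W = np.zeros(2), []
for _ in range(5):
    w, W = active_set_step(H, f, A, b, w, W)
# Converges to the constrained optimum w = (1, 0.5).
```

Repeating this step drives the iterate to the QP solution in finitely many iterations; the receding horizon scheme of this section executes exactly one such step per allotted time slot.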
Proposition 5. Algorithm 4, together with the rotation of the components in (32), will solve the quadratic program (19)–(20) in finite time as long as the exogenous state satisfies the dynamics (14).
Proof. From the argument above it is easy to see that as long as follows the dynamics (14), the algorithm is consistently solving essentially the same quadratic program. So it remains to check that the convergence proof of the standard active set algorithm remains valid despite the modifications we shall detail in Remark 7, which is indeed the case.
Of course, the most interesting feature of the receding horizon approach is that the solution will adapt autonomously to the new quadratic program if there is an abrupt change in the exogenous state. Since constraint (20) is independent of that state, an abrupt change will not destroy the feasibility of the previously determined iterate, and the previously determined working set also remains a valid subset of the active set. Hence, the receding horizon active set method will continue to work even though the objective function (19) has changed. However, if it is necessary to include not only control but also state constraints in the original problem formulation, we shall then require a quadratic programming algorithm (other than the active set method in its simplest form) that does not assume the initial guess to be feasible.
Remark 6. There are two possible ways to arrange the steps in Algorithm 4. One is to update the working set followed by the iterate, and the other is to update the iterate followed by the working set. In the receding horizon framework, it might seem at first glance that the first choice is the right one, since we would then avoid basing the optimization on an "outdated" working set if the exogenous state happens to have changed. However, it turns out that the first choice is actually undesirable. One well-known "weakness" of the active set method is that it is not easy to remove a constraint once it enters the working set, and this becomes even more of a concern in the receding horizon framework. If the exogenous state has changed, and so has the objective function (19), the stationary point previously obtained in step (5) may no longer be stationary, and at least one additional iteration is required to identify the new stationary point before we can decide whether any constraint can be dropped from the working set. This seriously slows down the transient response of the steady-state subproblem. Indeed, once the exogenous state has changed, many of the constraints in the previous working set are no longer sensible, and it is wiser to drop them promptly rather than to be too cautious, only to find much later that the constraints should have been dropped anyway.
Remark 7. One key element in the active set method of the quadratic program is to add or drop a constraint to or from the working set . The constraint being added belongs to the blocking set , defined as those constraints corresponding to the minimum in (26). Physically, they are the constraints that will be first encountered when we try to move to in (31). The constraint being dropped belongs to the set , defined as those constraints corresponding to a negative component of the Lagrange multiplier in (29). The active set method will converge in finite time no matter which constraint in will be added or which constraint in will be dropped. One standard and popular choice in the conventional active set method is that the one in corresponding to the most negative component of will be dropped, whereas the choice from will be arbitrary. This is a very natural choice when there is no other consideration.
However, in the receding horizon framework, one other (and in fact important) consideration emerges, which is the execution time of the control input(s) associated with a constraint. Specifically, if Algorithm 4 takes a certain time to carry out, the working set updated in the current iteration will be used to optimize the iterate in the next iteration, but the outcome of that optimization is ready only after that time has elapsed again, and it is on this outcome that the transient control is computed. Suppose that the transient subproblem takes one sampling interval to solve; then the corresponding control is updated one further interval later (see Section 5 below for a more detailed discussion). Hence, a "time priority" can be assigned to the components of the steady-state input, and from this argument, we choose to drop the constraint associated with the first input in this priority sequence, or to add the constraint associated with the last input in this sequence (the most negative Lagrange multiplier can still be used as a secondary criterion if two constraints happen to have the same urgency). The proposal here aims to assign the most freedom to the most urgent control input in the optimization, which makes sense in the receding horizon framework since the less urgent inputs may be reoptimized later.
Remark 8. Basically, the approach proposed in this section spreads the original quadratic program over many intervals, so that each interval carries out only one iteration of the algorithm, and also ensures that the quadratic program being solved is consistent as long as the prediction of the exogenous model is valid, migrating to a new quadratic program when there is a change in the exogenous signal. It is worth mentioning that standard MPC is a static controller by nature, since the true solution of a complete quadratic program is independent of the MPC's past decisions (past decisions can help to speed up the computations but will not affect the solution); by spreading the algorithm over several intervals, however, it is turned into a dynamic controller with an internal state.
4. Steady-State Subproblem: A Dynamic Policy Approach
The approach proposed in Section 3 optimizes the steady-state input sequence directly. Consequently, the number of (scalar) variables being optimized is proportional to the period length. To further cut down the required computations, this section proposes another approach based on the idea of a dynamic policy, inspired by the works of [13, 22, 23]. This approach typically optimizes a smaller number of variables, and that number is independent of the period length, although the number of linear inequality constraints remains the same. In return, the optimization result is expected to be slightly inferior to that of Section 3 due to the reduced number of variables (degrees of freedom). However, the number of optimized variables in this second approach can be adjusted by the designer.
The central idea of the dynamic policy approach [13, 22, 23] is that instead of optimizing the control input directly, we generate the control input by a dynamic system whose initial state is optimized. This is similar to what we did for the exogenous signals in Section 3. Specifically, we assume the steady-state input is also generated from a state-space model, designed a priori, whose initial state is optimized online. The quadratic program (19)–(20) then becomes one over the initial policy state, where the stacked map from that state to the input sequence is the observability matrix. The number of variables in this new quadratic program is the dimension of the initial policy state. If the policy dynamics are constructed from the method of Fourier decomposition described in Section 3, Shannon's sampling theorem implies that a sufficiently large but finite state dimension will guarantee a full reconstruction of the original optimization variable. On the other hand, a smaller dimension restricts the search to a lower-dimensional subspace, and hence the optimization is easier but suboptimal.
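The reparameterization can be sketched as follows: stacking the policy outputs over one period gives the matrix mapping the initial policy state to the whole input sequence, so the period-length QP collapses to one over the policy state. The single-harmonic oscillator policy and the target sequence below are illustrative assumptions.

```python
import numpy as np

def stacked_output_matrix(Au, Cu, N):
    """Stack Cu, Cu@Au, ..., Cu@Au^(N-1): maps the initial policy
    state w0 to the generated input sequence over one period."""
    rows, M = [], np.eye(Au.shape[0])
    for _ in range(N):
        rows.append(Cu @ M)
        M = Au @ M
    return np.vstack(rows)

# Hypothetical single-harmonic policy: inputs generated by a 2-state oscillator.
N = 100
theta = 2 * np.pi / N
Au = np.array([[np.cos(theta), np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]])
Cu = np.array([[1.0, 0.0]])

Theta = stacked_output_matrix(Au, Cu, N)     # shape (N, 2)

# The N-variable QP  min ||u - u_target||^2  (constraints G u <= h would
# become G @ Theta @ w0 <= h) collapses to 2 variables in w0:
u_target = 0.8 * np.cos(theta * np.arange(N))
H = Theta.T @ Theta                          # reduced Hessian
f = -Theta.T @ u_target                      # reduced linear term
w0 = np.linalg.solve(H, -f)                  # unconstrained optimum
assert np.allclose(Theta @ w0, u_target)     # target lies in the policy subspace
```

Here the target happens to lie in the subspace spanned by the policy, so the reduced QP is lossless; in general a low-dimensional policy trades optimality for a much smaller problem.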
One natural choice of the dynamics is to make in the exogenous model (14)–(17). Of course, it should be noted that constrained control is generally a nonlinear problem, and therefore the number of harmonics to be included in may exceed that of and in order to achieve the true optimal performance. However, we could have overdesigned the exogenous model (14)–(17) to include more harmonics in than necessary for and , thus making the choice here not so conservative. The simulation results in Section 6 will demonstrate both cases.
It remains to choose the output matrix in (35). The one we suggest here is based on linear servomechanism theory [24–26], which solves the linear version of our problem when there is no input constraint. Essentially, when there is no constraint, perfect asymptotic tracking can be achieved by solving the regulator equation for its matrix unknowns and then constructing the steady-state input from the solution. Therefore, to recover the optimal (or perfect) solution in the linear case when the input does not hit any constraint, the state-space model of the policy may be chosen as in (44). However, this state-space model is not guaranteed to be observable. When it is not, the resulting Hessian in (37) will be positive semidefinite instead of strictly positive definite. To overcome this, we suggest performing an orthogonal state transformation to bring (44) to the Kalman decomposition form and hence obtain a reduced-order model for (34)-(35); this is easy to verify since the transformed dynamics matrix is upper block-triangular.
Certainly, the discussion above only provides a suggestion of how to choose the state-space model for the policy, which we shall also adopt for our simulations in Section 6; in general, the designer should feel free to employ any valid state-space model that suits the problem at hand.
Remark 9. Having reparameterized the quadratic program in terms of the initial policy state rather than the input sequence, we can apply a version of Algorithm 4 similar to that in Section 3 to (37)-(38). In other words, it is not necessary to solve the quadratic program completely within one sampling interval. Instead of rotating the components of the input sequence, we advance the initial policy state through the policy dynamics.
5. Implementation Issues and Impacts on Transient Performance
Before we present the simulation results, let us comment on the impact of computational delay on the transient subproblem in Section 2. First of all, we assume that the transient quadratic program can be solved completely within one sampling interval. Therefore, despite the way we presented the cost function in (11), in actual implementation we shall optimize based on the predicted state at the next sampling instant rather than the current one; the unknown state can be projected from the known variables and the system dynamics. After the optimization is carried out, the resulting control input is executed at the next sampling instant. Bear in mind that all these calculations can only be based on the best knowledge of the signals at the current time.
Figure 1 summarizes how the unknown variables can be computed from the known variables. The variables in each layer are derived from the variables directly below them, but in actual implementation it is sometimes possible to derive explicit formulas to compute the upper layer from the lower layer, bypassing the intermediate layer and thus avoiding those intermediate calculations online. The variable on top is the control action to be executed, computed from the variables of the steady-state subproblem on the left and those of the transient subproblem on the right, separated by a solid line. The variables in the bottom layer are the exogenous state provided by the observer described in Section 3, a measurement of the current plant state, and the control input calculated by the algorithm in the previous step. Note that to compute the transient control, the values of the steady-state control are needed to form the constraints for the transient quadratic program, since the original linear inequality constraints apply to the total input. On the other hand, the steady-state control can be written explicitly as a linear function of the observer state. Finally, the steady-state subproblem requires a computational time of several sampling intervals, implying that the solution it provides is based on a measurement of the exogenous state taken several intervals earlier; in the worst case, the control is therefore based on correspondingly old information, which constitutes the worst-case delay.
Remark 10. Although we said in Section 2 that the transient subproblem was not the main interest of this paper, it is an ideal vehicle to demonstrate the power of MPC, since the "useful freedom" of the transient control may be totally consumed by the steady-state control when the latter hits a constraint. For example, the simulations to be discussed in Section 6 impose symmetric bounds on the input. If the steady-state control already saturates at a bound, one side of the transient control is lost, and that could be the only side that makes the reduction of the tracking error possible, thus forcing the transient control to zero. So the input constraint restricts not only the magnitude but also the time instants at which the transient control can act. Such a problem is extremely difficult for conventional control techniques.
6. Simulation Results
In this section, we use an example to demonstrate the performance of our algorithms by computer simulations. The plant is borrowed from [19] and represents a power quality device called the Unified Power Quality Conditioner (UPQC), described by a continuous-time state-space model. The exogenous input is composed of the supply voltage and the load current, which are periodic at 50 Hz but may contain higher-order harmonics. The plant output is composed of the load voltage and the supply current, which are to be made to track designated pure sine waves of 50 Hz. The control input is composed of two switching signals across the voltage source inverters (VSIs), both of which are required to satisfy given bounds. The general control objective is to maintain the output at the desired waveforms despite possible fluctuations in the exogenous input, such as supply voltage sags/swells or load demand changes. To apply the proposed MPC algorithms, we obtain a discrete-time version of the above state-space model with 100 samples per period. Small-sized quadratic programs (such as our transient subproblem) can possibly be solved within such a short time thanks to state-of-the-art code optimization [12], which reports sampling rates in the kHz range, but to solve a big quadratic program like our steady-state subproblem we resort to the technique of Algorithm 4. Note that in our formulation, the transient subproblem and the steady-state subproblem can be solved in parallel. Although the optimization of the transient control depends on the steady-state control, the transient control is computed from the steady-state solution made available in the previous step, so it is independent of the steady-state subproblem currently being solved.
As is typical in a power system, we assume only odd harmonics in the supply voltage and the load current. Hence, we can reduce the computations in the steady-state subproblem by the following easy modifications of the standard algorithms presented in Sections 3 and 4: the decision variables may be chosen to represent half of the period instead of the whole period, with the periodicity condition (6) replaced by its half-period counterpart, and with (17) and (36) modified accordingly. The rotation operation in (32) is modified in the same way. This halves the number of scalar variables as well as the number of constraints in the quadratic program.
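Assuming the odd-harmonic property means the signals are anti-periodic over half a period, i.e. u(t + T/2) = −u(t), the modified rotation can be sketched as follows (the exact convention in (32) may differ):

```python
import numpy as np

def rotate_half_period(u_half, tau):
    """Advance a half-period buffer of an odd-harmonic (anti-periodic)
    signal by tau samples: entries that wrap past T/2 re-enter negated,
    since u(t + T/2) = -u(t)."""
    rolled = np.roll(u_half, -tau)
    rolled[-tau:] = -rolled[-tau:]   # wrapped entries change sign
    return rolled

# A pure fundamental stored over half a period (N samples covering T/2).
N = 50
t = np.arange(N)
u = np.sin(np.pi * t / N)            # sin over [0, pi)
tau = 7
expected = np.sin(np.pi * (t + tau) / N)
assert np.allclose(rotate_half_period(u, tau), expected)
```

Storing only half a period and rotating with a sign flip is what halves both the variables and the constraints in the quadratic program.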
The model parameters of the UPQC used in our simulations are summarized in Table 1 for the circuit components and Table 2 for the line and VSI impedances. They are the same as the values in [19], except for V_{dc}, which we have changed from 400 V to 320 V so as to produce a saturated control more easily. Note that V_{dc} is the DC-link voltage, which determines how large a fluctuation in the supply voltage or load current the UPQC can handle. In other words, saturation occurs when the UPQC is trying to deal with an unexpected voltage sag/swell or load demand that is beyond its designed capability.


The simulation scenario is summarized in Figure 2. Both the supply voltage and the load current consist of odd harmonics up to the 9th order. Despite the harmonics, it is desirable to regulate the load voltage to a fixed pure sine wave, whereas the supply current should also be purely sinusoidal, but its magnitude and phase are selected to maintain a power factor of unity and to match the supply active power to the active power demanded by the load, which means the reference of this supply current depends on the disturbance. The waveforms of both signals are shown in Figure 2. The simulation scenario is designed such that the steady-state control is not saturated at the beginning. Then a voltage sag occurs which reduces the supply voltage to 10% of its original value. The UPQC is expected to keep the load voltage unchanged but (gradually) increase the supply current so as to retain the original active power, which drives the steady-state control into a slightly saturated situation. Next, the load demand increases, causing the reference of the supply current to increase again, and the steady-state control becomes deeply saturated. Finally, the voltage sag is cleared and the supply voltage returns to its starting value, but the (new) load demand remains. Although the load demand is still higher than its initial value, the required control will just be within the bounds, thus leaving the saturation region and returning to the linear region. So, in short, the steady-state control is expected to experience "linear → slightly saturated → deeply saturated → linear but nearly saturated" in this simulation scenario.
To evaluate the performance of our algorithms, we compare them against two other cases. In the first case, instead of Algorithm 4, the complete quadratic program is solved in each iteration of the steady-state subproblem every sampling intervals. We call this case the complete QP; it indicates how much transient performance is sacrificed (in theory) by spreading the quadratic program over a number of iterations. In the second case, the constraints are ignored altogether, so that the optimal in the steady-state subproblem is supposed to be , where is the solution to the regulator equation (41). The transient subproblem can also be solved by , where is the optimal state-feedback gain minimizing the transient quadratic cost. However, the combined input is still clipped at . We label this control law the multivariable regulator (MVR), following linear servomechanism theory. This case indicates how bad the quadratic cost can be when a linear control law is used without taking the constraints into consideration.
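As a concrete illustration, the clipped linear law of the MVR can be sketched as follows. The gain `K`, the steady-state pair `(x_ss, u_ss)`, and the symmetric bound `u_max` are illustrative placeholders, not values from the paper.

```python
import numpy as np

def mvr_control(x, x_ss, u_ss, K, u_max):
    """Clipped linear control law (the MVR).

    The unconstrained law u = u_ss + K (x - x_ss) is computed first,
    then each component is clipped to the box [-u_max, u_max].
    The constraints play no role in the design and are only enforced
    by clipping, which is what makes the law suboptimal under saturation.
    """
    u = u_ss + K @ (x - x_ss)
    return np.clip(u, -u_max, u_max)

# Illustrative 2-state, 1-input example (placeholder numbers).
K = np.array([[-1.0, -0.5]])
x = np.array([2.0, 1.0])
x_ss = np.zeros(2)
u_ss = np.array([0.8])
u = mvr_control(x, x_ss, u_ss, K, u_max=1.0)
# Unconstrained value is 0.8 - 2.5 = -1.7, clipped to the lower bound -1.0.
```

The point of the comparison in the text is precisely that this clipping step, applied after the fact, can leave the system at a poor operating point when the saturation persists.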
Note that in both cases, the computational delays discussed in Section 5 will be in force: instead of , will be optimized, and the steady-state control is only updated every sampling intervals. In reality, of course, the MVR would involve negligible computational delay, whereas the complete QP would need a longer time to solve than Algorithm 4, but we are merely using their associated quadratic costs here to analyze the behaviours of our algorithms.
Figure 3 plots the steady-state cost of our first approach in Section 3, based on receding horizon quadratic programming, together with the costs in the other two cases. is assumed to be 3 in this simulation. The transient subproblem has a control horizon of , corresponding to 10 scalar variables and 20 scalar inequality constraints. On the other hand, the steady-state subproblem has , corresponding to 100 variables and 200 constraints. As shown in Figure 3, all costs are zero prior to s and should also settle down to zero after s. The transient response of our approach is close to that of the complete QP during the transitions from “linear” to “slightly saturated” and from “slightly saturated” to “deeply saturated,” but is poorer when it tries to return from “deeply saturated” to “linear.” This can probably be attributed to the weakness of the active set method in removing a constraint from the working set, as discussed in Remark 6. Figure 3 also indicates that the MVR settles down to a much higher value when saturation occurs, due to its ignorance of the control constraints. The exact values just prior to s and s are summarized in Table 3; they are about 6–7 times the values of our algorithm.
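The idea of spreading the steady-state QP over sampling intervals can be sketched as follows. As a stand-in for one active-set iteration of Algorithm 4, a single projected-gradient step of the box-constrained QP is taken per sampling interval, warm-started from the previous interval; the problem data `H`, `f`, and the bound are synthetic.

```python
import numpy as np

def spread_qp_step(u, H, f, u_max, alpha):
    """One solver iteration per sampling interval.

    The steady-state QP  min 0.5 u'Hu + f'u  s.t. |u_i| <= u_max  is not
    solved to completion within one interval; instead, a single
    projected-gradient step (a simplified stand-in for one active-set
    iteration) is performed, and the iterate carries over to the next
    interval. Over many intervals the iterates approach the optimum.
    """
    grad = H @ u + f
    return np.clip(u - alpha * grad, -u_max, u_max)

# Synthetic strongly convex QP with 8 variables and box constraints.
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)          # positive definite Hessian
f = rng.standard_normal(n)
alpha = 1.0 / np.linalg.norm(H, 2)   # step size from the largest eigenvalue
u = np.zeros(n)
for _ in range(500):                 # 500 sampling intervals
    u = spread_qp_step(u, H, f, u_max=1.0, alpha=alpha)
```

After enough intervals, the fixed-point residual of the projected-gradient map is essentially zero, i.e., the spread-out solver has reached the constrained optimum; the price, as in Figure 3, is a slower transient while the iterates are still converging.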

Figure 4 zooms into the first component of the control input . The top two plots draw attention to the steady-state control ( is essentially just prior to s and s). The differences in between the MVR and the MPC are very subtle. Compared with the MVR, the MPC merely reaches and leaves its limit ( here) at very slightly different time instants and also produces some “optimized ripples” of less than 0.25% around that limit instead of the “flat” value adopted in the clipped linear control law, but by doing these little things the MPC manages to bring down by almost one order of magnitude. This demonstrates how nontrivial the optimal can be. We can also see from the plots that only one constraint is active in the “slightly saturated” situation, whereas multiple constraints are active in the “deeply saturated” situation. On the other hand, the bottom two plots in Figure 4 illustrate our discussion in Remark 10. The plots clearly show that during certain moments of the transient stages ( and ), the transient control is “disabled” due to the saturation of the steady-state control . Note that we label the dashed blue curve as “ used to compute ” since it is slightly different from the actual . For instance, is computed from the knowledge of at time , which is not exactly the same as . Obviously, is not disabled simply whenever saturates; this happens only when the desired direction of violates the active constraint.
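The disabling behaviour discussed above can be sketched component-wise. The rule used here (zero the transient component only when it pushes further into a bound at which the steady-state control is already active) is a simplified stand-in for the exact condition in Remark 10, and all numerical values are illustrative.

```python
import numpy as np

def combine(u_ss, du, u_max):
    """Combine steady-state and transient controls under a box constraint.

    The transient component du is not discarded merely because u_ss is
    saturated. It is zeroed, component by component, only when it points
    further into the active bound; when it points back into the feasible
    region it is kept, and the sum is clipped as a safeguard.
    """
    at_upper = np.isclose(u_ss, u_max) & (du > 0)
    at_lower = np.isclose(u_ss, -u_max) & (du < 0)
    du = np.where(at_upper | at_lower, 0.0, du)
    return np.clip(u_ss + du, -u_max, u_max)

u = combine(np.array([1.0, 1.0]), np.array([0.3, -0.3]), u_max=1.0)
# Component 1: u_ss is at the upper bound and du > 0, so du is disabled.
# Component 2: du points back into the feasible region and is kept.
```

This is why, in the bottom plots of Figure 4, the transient control is only intermittently disabled during the saturated stages rather than being switched off outright.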
Next, let us look at the performance of our second MPC approach in Section 4, based on the dynamic policy. Odd harmonics up to the 9th order are included in , resulting in a total of variables; see Table 3. Since the number of variables is much lower than in the first approach, we assume here, that is, one iteration of Algorithm 4 (equivalent version) is carried out in each sampling interval, whereas the transient subproblem is solved completely within each sampling interval. The transient performance is plotted in Figure 5. Note that the MVR curve exhibits a slightly different transient from Figure 3 since their values are different. The dynamic policy approach clearly shows a faster transient response than the receding horizon quadratic programming approach, not only because of a smaller but also because of a smaller-sized quadratic program overall. However, the drawback is a slightly suboptimal , as indicated in Table 3.
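The variable reduction achieved by the harmonic parameterization can be sketched as follows. The period length `N = 100` is an illustrative choice consistent with the 100-variable figure quoted for the first approach, not a value stated for this example.

```python
import numpy as np

def odd_harmonic_basis(N, max_order):
    """Basis of sine/cosine pairs for odd harmonics over one period of N samples.

    A periodic input is parameterized as u = Phi @ theta, so the QP is
    posed over the coefficient vector theta instead of the N individual
    input samples, shrinking the number of decision variables.
    """
    k = np.arange(N)
    cols = []
    for h in range(1, max_order + 1, 2):   # odd harmonics 1, 3, ..., max_order
        cols.append(np.cos(2 * np.pi * h * k / N))
        cols.append(np.sin(2 * np.pi * h * k / N))
    return np.column_stack(cols)

Phi = odd_harmonic_basis(N=100, max_order=9)
# 5 odd harmonics x (cos, sin) = 10 coefficients replace 100 samples.
```

Because the columns are mutually orthogonal over the period, the reduced QP remains well conditioned; the suboptimality noted in Table 3 comes only from restricting the input to the span of the chosen harmonics.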
As mentioned in Section 4, it is possible to overdesign , and hence , so that the optimal in this second MPC method will approach the first MPC method. For example, although we only have odd harmonics up to the 9th order in , we may include odd harmonics up to the 29th order in and . The results are also recorded in Table 3, and we see that this value is very close to the optimal one in the first method.
7. Conclusions
To apply MPC to fast-sampling systems with input constraints for the tracking of periodic references, efficient algorithms to reduce the online computational burden are necessary. We have decomposed the tracking problem into a computationally complex steady-state subproblem and a computationally simple transient subproblem, and then proposed two approaches to solve the former. The first approach, based on the concept of receding horizon quadratic programming, spreads the optimization over several sampling intervals, thus reducing the computational burden at the price of a slower transient response. The second approach, based on the idea of a dynamic policy on the control input, further reduces the online computations at the price of a slightly suboptimal asymptotic performance. Despite these limitations, the two approaches make the application of MPC to fast-sampling systems possible. Their transient behaviours and steady-state optimality have been analyzed via computer simulations, which have also demonstrated that the steady-state subproblem and the transient subproblem can be solved in parallel with different sampling rates. When the methods proposed in this paper are combined with modern code optimizations, the applicability of MPC to the servomechanism of fast-sampling constrained systems will be greatly enhanced.
Acknowledgment
This work is supported by HKU CRCG 201008159001.
References
J. M. Maciejowski, Predictive Control with Constraints, Prentice-Hall, New Jersey, USA, 2002.
S. J. Qin and T. A. Badgwell, “A survey of industrial model predictive control technology,” Control Engineering Practice, vol. 11, no. 7, pp. 733–764, 2003.
P. Whittle, Optimization Over Time, Wiley, New York, NY, USA, 1982.
W. H. Kwon and S. Han, Receding Horizon Control, Springer, New York, NY, USA, 2005.
A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, “The explicit linear quadratic regulator for constrained systems,” Automatica, vol. 38, no. 1, pp. 3–20, 2002.
A. Bemporad, F. Borrelli, and M. Morari, “Min-max control of constrained uncertain discrete-time linear systems,” IEEE Transactions on Automatic Control, vol. 48, no. 9, pp. 1600–1606, 2003.
A. Bemporad and C. Filippi, “Suboptimal explicit receding horizon control via approximate multiparametric quadratic programming,” Journal of Optimization Theory and Applications, vol. 117, no. 1, pp. 9–38, 2003.
Z. Wan and M. V. Kothare, “Robust output feedback model predictive control using offline linear matrix inequalities,” Journal of Process Control, vol. 12, no. 7, pp. 763–774, 2002.
Z. Wan and M. V. Kothare, “An efficient offline formulation of robust model predictive control using linear matrix inequalities,” Automatica, vol. 39, no. 5, pp. 837–846, 2003.
M. Åkerblad and A. Hansson, “Efficient solution of second order cone program for model predictive control,” International Journal of Control, vol. 77, no. 1, pp. 55–77, 2004.
C. V. Rao, S. J. Wright, and J. B. Rawlings, “Application of interior-point methods to model predictive control,” Journal of Optimization Theory and Applications, vol. 99, no. 3, pp. 723–757, 1998.
J. Mattingley, Y. Wang, and S. Boyd, “Receding horizon control: automatic generation of high-speed solvers,” IEEE Control Systems Magazine, vol. 31, no. 3, pp. 52–65, 2011.
M. Cannon and B. Kouvaritakis, “Optimizing prediction dynamics for robust MPC,” IEEE Transactions on Automatic Control, vol. 50, no. 11, pp. 1892–1897, 2005.
A. Richards, K. V. Ling, and J. Maciejowski, “Robust multiplexed model predictive control,” in Proceedings of the European Control Conference, Kos, Greece, July 2007.
K. V. Ling, W. K. Ho, Y. Feng, and B. Wu, “Integral-square-error performance of multiplexed model predictive control,” IEEE Transactions on Industrial Informatics, vol. 7, no. 2, pp. 196–203, 2011.
D. Li and Y. Xi, “Aggregation based robust model predictive controller for systems with bounded disturbance,” in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 3374–3379, Chongqing, China, June 2008.
D. Li and Y. Xi, “Aggregation based closed-loop MPC with guaranteed performance,” in Proceedings of the 48th IEEE Conference on Decision and Control (CDC/CCC '09), pp. 7400–7405, Shanghai, China, December 2009.
J. B. Rawlings and K. R. Muske, “The stability of constrained receding horizon control,” IEEE Transactions on Automatic Control, vol. 38, no. 10, pp. 1512–1516, 1993.
K. H. Kwan, Y. C. Chu, and P. L. So, “Model-based H∞ control of a unified power quality conditioner,” IEEE Transactions on Industrial Electronics, vol. 56, no. 7, pp. 2493–2504, 2009.
S. G. Nash and A. Sofer, Linear and Nonlinear Programming, McGraw-Hill, New York, NY, USA, 1996.
J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, NY, USA, 2nd edition, 2006.
A. Gautam, Y.-C. Chu, and Y. C. Soh, “Robust MPC with optimized controller dynamics for linear polytopic systems with bounded additive disturbances,” in Proceedings of the 7th Asian Control Conference (ASCC '09), pp. 1322–1327, Hong Kong, China, August 2009.
A. Gautam, Y.-C. Chu, and Y. C. Soh, “Optimized dynamic policy for receding horizon control of linear time-varying systems with bounded disturbances,” IEEE Transactions on Automatic Control. In press.
E. J. Davison, “The robust control of a servomechanism problem for linear time-invariant multivariable systems,” IEEE Transactions on Automatic Control, vol. 21, no. 1, pp. 25–34, 1976.
B. A. Francis and W. M. Wonham, “The internal model principle of control theory,” Automatica, vol. 12, no. 5, pp. 457–465, 1976.
B. A. Francis, “The linear multivariable regulator problem,” SIAM Journal on Control and Optimization, vol. 15, no. 3, pp. 486–505, 1977.
Copyright
Copyright © 2012 Yun-Chung Chu and Michael Z. Q. Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.