Research Article  Open Access
Yi Ding, Zuhua Xu, Jun Zhao, Zhijiang Shao, "Fast Model Predictive Control Combining Offline Method and Online Optimization with KD Tree", Mathematical Problems in Engineering, vol. 2015, Article ID 982041, 10 pages, 2015. https://doi.org/10.1155/2015/982041
Fast Model Predictive Control Combining Offline Method and Online Optimization with KD Tree
Abstract
Computation time is the main factor that limits the application of model predictive control (MPC). This paper presents a fast model predictive control algorithm that combines an offline method and online optimization to solve the MPC problem. The offline method uses a kd tree instead of a table to implement partial enumeration, which accelerates the online searching operation. Only part of the explicit solution is stored in the kd tree for online searching, and the kd tree is updated at runtime to accommodate changes in the operating point. Online optimization is invoked when searching the kd tree fails. Numerical experiments show that the proposed algorithm is efficient on both small-scale and large-scale processes. The average speedup factor in the large-scale process is at least 6, the worst-case speedup factor is at least 2, and the performance is less than 0.05% suboptimal.
1. Introduction
Model predictive control (MPC) is widely applied in the process control area because of its ability to handle constraints and MIMO systems. However, the computation burden at every control interval [i.e., solving a quadratic programming (QP) problem] limits the application of MPC to slow processes. Thus, accelerating the computation of MPC is a popular theme in the control community.
There are two main approaches to solving the QP problem in MPC. The first approach is online optimization, where two algorithms are widely used: the interior-point method and the active-set method. A brief introduction to these methods is presented in [1]. Many researchers have attempted to exploit the special structure of MPC and tailor the two algorithms to speed up the solving process. The sparsity and other structural features of MPC are exploited in [2, 3] to customize the interior-point method. QPSchur, a QP algorithm based on the active-set method, is proposed in [4] and focuses on large-scale MPC. Another online active-set strategy is described in [5] for the fast solution of parametric QPs in MPC; this strategy uses information from the previous QP under the assumption that the QP problem changes only slightly. Besides the interior-point and active-set methods, other online methods include the Newton method [6] and the dual gradient-projection method [7].
The second method is the explicit approach proposed by Bemporad et al. [8]. They showed that the solution to the QP problem in linear MPC is a piecewise affine function of the state variables. Thus, a lookup table can be prepared offline by partitioning the feasible region of the state variables. The online work then consists of only two steps: finding the critical region (CR) of a given point and performing simple calculations with the corresponding parameters. Traditional optimization computation is avoided and computation time is reduced in this manner. However, the number of candidate CRs that must be checked for optimality grows exponentially with the number of inequality constraints, which is the major drawback of this method. Thus, several researchers have proposed improvements to explicit MPC. A suboptimal solution is computed using approximation in [9]. An algorithm extended to any convex parametric optimization problem is presented in [10]. The CR number is decreased in [11] by eliminating regions whose neighbor's control law can guarantee stability, which also speeds up the online searching.
Despite the developments in both online and offline methods, several problems of certain sizes and types still cannot be efficiently solved by the online or the offline method alone. Several researchers have combined the explicit method and online optimization in recent years to implement fast MPC. The partial enumeration (PE) method is proposed in [12]. The PE method uses a table to store the recently searched CRs and combines it with the online method to achieve fast MPC for large-scale problems. However, the PE method is effective only when the disturbances change slowly and have small magnitude. This limitation can be attributed to the table search being time consuming and to the update method in PE tending to make the information in the table converge to several local areas. Thus, we propose a method that combines explicit MPC and online optimization by using the kd tree structure to obtain fast MPC for both small-scale and large-scale problems. The properties of our method guarantee that it remains effective regardless of disturbance magnitude or setpoint changes.
The rest of this paper is organized as follows. Section 2 discusses several background theories used in our method. Section 3 presents the fast MPC algorithm based on kd tree and online optimization. Section 4 applies the algorithm to two processes to verify the efficiency of our design. Finally, Section 5 concludes this paper.
2. Background
2.1. MPC Formulation
We consider a linear time-invariant system model in the state-space form as follows:

$x_{k+1} = A x_k + B u_k$, (1)

where $x_k$ is the state; $u_k$ is the input; and $A$ and $B$ are fixed matrices with appropriate dimensions. We consider the problem of regulating system (1) from the current state to the origin for simplicity. Reference [8] shows that the controller for a regulation problem can be extended to a trajectory tracking problem with some linear transformation. The following input and state constraints must be fulfilled at each control interval:

$F_u u_k \le g_u, \qquad F_x x_k \le g_x$, (2)

where the constant matrices $F_u$, $F_x$ and vectors $g_u$, $g_x$ are specified by the user. Here the number of rows of $F_u$ is the number of constraints on the inputs, and the number of rows of $F_x$ is the number of constraints on the states.
MPC computes the input at each control interval by solving the following constrained QP problem:

$\min_{u_0,\dots,u_{N-1}} \; x_N^\top P x_N + \sum_{k=0}^{N-1} \left( x_k^\top Q x_k + u_k^\top R u_k \right)$ (3a)
subject to $x_{k+1} = A x_k + B u_k$, (3b)
$F_u u_k \le g_u$, (3c)
$F_x x_k \le g_x$, (3d)
$x_0 = x$, (3e)

where $Q$ and $P$ are positive semidefinite matrices and $R$ is a positive-definite matrix. The prediction horizon is $N$, and the control horizon is set equal to $N$ for simplicity. The terminal penalty weight matrix $P$ can be calculated by solving a Riccati equation for the unconstrained system.
Only the first control move $u_0$ is applied as input to system (1) after the QP is solved. The optimization problem ((3a), (3b), (3c), (3d), and (3e)) is solved again in the next control interval, based on the assumption that the new state measurement is obtained. Thus, the receding horizon control is implemented.
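The receding horizon loop above can be sketched as follows; the scalar plant and the linear `controller` are hypothetical stand-ins for the real model and the QP solver:

```python
import numpy as np

def receding_horizon(A, B, controller, x0, steps):
    """Apply only the first move u_k = controller(x_k) each interval,
    then re-measure the state and repeat (receding horizon)."""
    x, xs = x0, [x0]
    for _ in range(steps):
        u = controller(x)             # solve the QP, keep only the first move
        x = A @ x + B @ u             # plant update x_{k+1} = A x_k + B u_k
        xs.append(x)
    return xs

# Hypothetical unstable scalar plant with a stabilizing linear controller.
A = np.array([[1.2]])
B = np.array([[1.0]])
traj = receding_horizon(A, B, lambda x: -0.7 * x, np.array([1.0]), steps=20)
```

With `u = -0.7 x` the closed loop is `x+ = 0.5 x`, so the state decays geometrically toward the origin.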
2.2. MPC Computation
By substituting the dynamics (3b) into the objective, the optimization problem ((3a), (3b), (3c), (3d), and (3e)) can be rewritten as

$\min_z \; \tfrac{1}{2} z^\top H z + x^\top F^\top z$ (4a)
subject to $G z \le w + S x$, (4b)

where $x$ is the current state, $z = [u_0^\top, \dots, u_{N-1}^\top]^\top$ is the stacked input sequence, and $H$, $F$, $G$, $w$, and $S$ can be easily obtained [8]. The optimality (KKT) conditions of the optimization problem in system ((4a), (4b)) are as follows:

$H z + F x + G^\top \lambda = 0$, (5a)
$G_{\mathcal{A}} z = w_{\mathcal{A}} + S_{\mathcal{A}} x$, (5b)
$\lambda_{\mathcal{A}} \ge 0$, (5c)
$G_{\mathcal{I}} z < w_{\mathcal{I}} + S_{\mathcal{I}} x$, (5d)
$\lambda_{\mathcal{I}} = 0$, (5e)

where $\mathcal{A}$ denotes the active constraint indices and $\mathcal{I}$ denotes the inactive constraint indices. Here $G_{\mathcal{A}}$, $w_{\mathcal{A}}$, and $S_{\mathcal{A}}$ represent the row subvector/submatrix that corresponds to the active set $\mathcal{A}$.
The linear relation between the parameter $x$ and the optimal solution in the explicit MPC strategy [8] is explicitly expressed based on the equations in system ((4a), (4b)). Equations (5a) and (5b) derive the following:

$\lambda_{\mathcal{A}}(x) = -\left( G_{\mathcal{A}} H^{-1} G_{\mathcal{A}}^\top \right)^{-1} \left( w_{\mathcal{A}} + S_{\mathcal{A}} x + G_{\mathcal{A}} H^{-1} F x \right).$ (6)

At this point, we assume the linear independence of the rows in $G_{\mathcal{A}}$. We can then derive the affine functions between the parameter $x$ and the optimal values $z$ and $\lambda_{\mathcal{A}}$ as follows:

$z(x) = K_z x + c_z$, (7a)
$\lambda_{\mathcal{A}}(x) = K_\lambda x + c_\lambda$, (7b)

where the coefficients $K_\lambda$, $c_\lambda$, $K_z$, and $c_z$ can be computed as follows:

$K_\lambda = -\left( G_{\mathcal{A}} H^{-1} G_{\mathcal{A}}^\top \right)^{-1} \left( S_{\mathcal{A}} + G_{\mathcal{A}} H^{-1} F \right)$, (8a)
$c_\lambda = -\left( G_{\mathcal{A}} H^{-1} G_{\mathcal{A}}^\top \right)^{-1} w_{\mathcal{A}}$, (8b)
$K_z = -H^{-1} \left( F + G_{\mathcal{A}}^\top K_\lambda \right)$, (8c)
$c_z = -H^{-1} G_{\mathcal{A}}^\top c_\lambda$. (8d)

The valid region of the formulas in systems ((7a), (7b)) and ((8a), (8b), (8c), (8d)), namely, the "CR" in [8], is based on a certain active set defined by the inequality constraints in (5d) and (5e). We can derive the following inequalities by substituting $z(x)$ from system ((7a), (7b)) to check whether a parameter instance is in the certain CR:

$G_{\mathcal{I}} (K_z x + c_z) < w_{\mathcal{I}} + S_{\mathcal{I}} x$, (9a)
$K_\lambda x + c_\lambda \ge 0$. (9b)

We then define the feasible region of a specified QP problem as follows:

$X_f = \{ x : \exists z \text{ such that } G z \le w + S x \}.$ (10)

Thus, $X_f$ can be partitioned into several CRs. Figure 1 is an illustration of the partition for a QP problem with a two-dimensional parameter.
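A minimal numerical sketch of equations ((7a), (7b)) and ((8a), (8b), (8c), (8d)), assuming the condensed form of ((4a), (4b)); the matrices below are hypothetical toy data, not taken from the paper:

```python
import numpy as np

def explicit_law(H, F, G, w, S, active):
    """Affine optimizer z*(x) = Kz @ x + cz and multipliers
    lam(x) = Kl @ x + cl for a fixed active set.
    Assumes the rows G[active] are linearly independent."""
    Hinv = np.linalg.inv(H)
    GA, wA, SA = G[active], w[active], S[active]
    M = np.linalg.inv(GA @ Hinv @ GA.T)          # (G_A H^-1 G_A^T)^-1
    Kl = -M @ (SA + GA @ Hinv @ F)               # lam(x) = Kl x + cl
    cl = -M @ wA
    Kz = -Hinv @ (F + GA.T @ Kl)                 # z(x)  = Kz x + cz
    cz = -Hinv @ (GA.T @ cl)
    return Kz, cz, Kl, cl

# Hypothetical toy data: 2 decision variables, 1 parameter, 1 active row.
H = np.array([[2.0, 0.0], [0.0, 2.0]])
F = np.array([[1.0], [0.0]])
G = np.array([[1.0, 0.0], [0.0, 1.0]])
w = np.array([[-1.0], [1.0]])
S = np.array([[0.0], [0.0]])
active = [0]

Kz, cz, Kl, cl = explicit_law(H, F, G, w, S, active)

# Verify the KKT conditions (5a)-(5b) at a sample parameter value.
x = np.array([[0.3]])
z, lam = Kz @ x + cz, Kl @ x + cl
stationarity = H @ z + F @ x + G[active].T @ lam     # should be zero
primal = G[active] @ z - (w[active] + S[active] @ x)  # should be zero
```

Inside the CR, the multiplier `lam(x)` must also stay nonnegative, which is exactly inequality (9b).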
The primary idea of explicit MPC is to calculate and store the information of each valid CR offline. The online work is only to determine the exact CR for the current parameter and compute the optimal value using system ((7a), (7b)). References [8, 13] both propose efficient algorithms for the offline calculation of CRs. Explicit MPC is only suitable for small-scale problems because the CR number and the time required to locate the CR increase exponentially as the problem's scale increases. Many works have been published to improve the performance of explicit MPC, which can be roughly categorized into data structure research [14, 15], approximation methods [9, 16], and combinations with online methods [12, 17]. The derivation above is based on a regulation problem; the extension to a reference tracking problem can be found in [8].
3. Combined Method with KD Tree
The PE method introduced in [12] combines explicit MPC and online optimization. This method uses a table to store the recently searched CRs, and the table is sorted according to the invoke frequency of each CR. To solve the optimization problem for a specific state, the CR that corresponds to the state is searched in the table. If the appropriate CR is found, the control law can be calculated immediately; otherwise, the online optimization method is invoked and the table is updated. This approach is appealing for large-scale applications with small disturbances. However, the PE method has two limitations that cannot be ignored. First, searching for an item in the table can be time consuming; the linear complexity of search queries on the table structure definitely limits the table length. Second, the updating strategy used in the PE method tends to make the CRs in the table converge to several local areas, which renders the PE method invalid when large-amplitude noise or a setpoint change occurs.
The limitations discussed above can be illustrated by Figures 2 and 3. Figure 2 shows a 2D case. Suppose, for example, that the system first runs in Region 1 and the information of the CRs around it is stored in the table. Then the setpoint changes and forces the state into Region 2. At this moment, the information of the CRs around Region 2 is added to the table, which can be called a "learning period," and some CRs in the table are deleted if the table exceeds its length. In the worst case, if the system runs in Region 2 for a long time and the invoke frequencies of the CRs around Region 2 accumulate to a high level, the information of the CRs around Region 1 is all deleted and only those around Region 2 remain in the table, which amounts to a kind of "convergence." If the setpoint then changes again and drives the states back to Region 1, the PE method has to go through another "learning period" to re-add the information of the CRs around Region 1. This limitation has also been noted in [12].
We can observe the phenomenon mentioned above in Figure 3, which contains three subplots: the first shows the time used for the online calculation of the MPC problem, the second shows the trajectory of the output, and the third describes the result of the PE method, in which 1 means that the corresponding CR is found in the table and −1 means it is not found. The model comes from Example 1 in this paper, and the length of the table is 15. We first change the setpoint from −1.5 to 1.5; we then change it from 1.5 to −1.5 and finally back to 1.5. The "learning period" of the PE method after the third setpoint change is almost as long as the first "learning period," which means that the information of the CRs stored in the table during the first "learning period" has been squeezed out by the information stored in between. In fact, increasing the table length in the PE method can solve this problem. However, a longer table means more search time, both on average and in the worst case. A more detailed comparison on numerical experiments can be found in Section 4.
In order to overcome the two weaknesses of the PE method while maintaining its advantages, we propose a new method to organize the CR information in a limited kd tree. That is, a kd tree structure is used to implement the partial enumeration in our method. The “limited” here means that the height of the kd tree is restricted, and the kd tree is updated online to maintain its size.
3.1. KD Tree
The kd tree is a type of binary search tree that deals with multidimensional data [18, 19]. Each node in the kd tree represents a record, which is comprised of $k$ keys. Thus, the node can be viewed as a point in the $k$-dimensional space. Additionally, a node in the kd tree can also represent a subregion of the entire space. Each node can have at most two sons, or successor nodes. The region represented by each successor node is a partition of the region represented by the father node.
In one-dimensional searching, each record is described by a single key. A node with a key value less than or equal to a given node's key lies in its left subtree, whereas a node with a larger key value lies in its right subtree. Thus, the key variable is a discriminator for assigning nodes to the two subtrees. In $k$-dimensional searching, a record is comprised of $k$ keys. The kd tree uses the following strategy to determine the discriminator for a node based on its level in the tree: the discriminator is specified for each level by cycling through the keys in order. The formula $i = L \bmod k$ is proposed in [18] to describe this relation, where $i$ is the index of the discriminator key for level $L$, and the level of the root node is zero. The complexity of inserting a node into a randomly built tree is $O(\log N)$ on average, in which $N$ is the total number of records in the kd tree.
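The level-cycling insertion rule can be sketched as follows (a minimal sketch, not a production kd tree); the points are those used to build the tree of Figure 4:

```python
from dataclasses import dataclass

K = 2  # dimensionality of the keys

@dataclass
class Node:
    key: tuple
    level: int = 0
    left: "Node" = None
    right: "Node" = None

def insert(root, key):
    """Insert a key; the discriminator at level L is L mod K,
    and ties (key <= node key) go to the left subtree."""
    if root is None:
        return Node(key)
    node = root
    while True:
        d = node.level % K                      # cycling discriminator
        side = "left" if key[d] <= node.key[d] else "right"
        child = getattr(node, side)
        if child is None:
            setattr(node, side, Node(key, node.level + 1))
            return root
        node = child

# Build the tree of Figure 4 by inserting the points in sequence.
root = None
for p in [(0, 0), (-10, 10), (10, -10), (-40, -20), (-20, 11), (20, 0)]:
    root = insert(root, p)
```

After these insertions, (−10, 10) and (10, −10) are the root's children, and (−20, 11) ends up to the right of (−10, 10) because level 1 discriminates on the second coordinate.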
At this point, we illustrate the above principles with a 2-dimensional example. The kd tree in Figure 4 is constructed by inserting the following points in sequence: (0, 0), (−10, 10), (10, −10), (−40, −20), (−20, 11), and (20, 0). Figure 5 shows the corresponding plot in the 2-dimensional plane.
Figure 5 shows that each node in the kd tree represents a region in the 2-dimensional space. The root node represents the entire feasible region, specifically (−50, −50, 50, 50) in the above case. Here we adopt the tuple ($x_{\min}$, $y_{\min}$, $x_{\max}$, $y_{\max}$) to describe a 2-dimensional region. The region represented by the node (−10, 10) is (−50, −50, 0, 50), whereas (−20, 11) represents the region (−50, 10, 0, 50).
A brief introduction to the kd tree and its insertion operation was given above. The next important operation on the kd tree is searching. Searching can be classified into four main types according to the query type [18, 19]. We use the so-called "region query" in our algorithm, which finds the records whose key vectors are located in the region specified by the query. For instance, given the query region (−45, −30, −30, −10) in the above case, the region query returns all the record(s) in it, namely, the node (−40, −20). The algorithm for this query type is discussed in detail in [18]; a recursive method is used to find all the nodes whose representative regions intersect the query region and to decide whether each such node is in the region. Reference [20] proves a worst-case complexity bound for this algorithm, and [18, 19] illustrate through many simulations that its performance is reasonable in practice. Additionally, the size of the query region affects the performance of the searching operation.
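A minimal sketch of the region query, with the tree of Figure 4 encoded as nested `(key, left, right)` triples (an assumption of this sketch, not the paper's data layout):

```python
K = 2  # dimensionality of the keys

def region_query(node, lo, hi, level=0, out=None):
    """Collect all keys inside the axis-aligned box [lo, hi],
    pruning subtrees that cannot intersect the query box."""
    if out is None:
        out = []
    if node is None:
        return out
    key, left, right = node
    d = level % K
    if key[d] >= lo[d]:                 # left subtree may intersect the box
        region_query(left, lo, hi, level + 1, out)
    if all(lo[i] <= key[i] <= hi[i] for i in range(K)):
        out.append(key)                 # this record lies in the box
    if key[d] <= hi[d]:                 # right subtree may intersect the box
        region_query(right, lo, hi, level + 1, out)
    return out

# Tree of Figure 4 as nested (key, left, right) triples.
tree = ((0, 0),
        ((-10, 10), ((-40, -20), None, None), ((-20, 11), None, None)),
        ((10, -10), None, ((20, 0), None, None)))

found = region_query(tree, lo=(-45, -30), hi=(-30, -10))
```

For the query region (−45, −30, −30, −10) from the text, only the node (−40, −20) is returned; the entire right subtree of the root is pruned without being visited.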
Another important operation is deletion, which is also proposed in [18]. Notably, deletion does not change the kd tree property, and the complexity of deleting a node from a randomly built tree is $O(\log N)$ on average. The last operation we use in our algorithm optimizes an unbalanced tree; it is used in the offline preparation.
3.2. Implementation
Section 3.1 states that the key of a node in the kd tree is a vector. Each node in our method stores the information of a CR, and the Chebyshev center of the CR is set as the key vector of the corresponding node. The Chebyshev center of the CR is the point inside the CR that is farthest from the CR's boundary, that is, the center of the largest ball inscribed in the CR [21]. We selected this point to make the key vector representative of the corresponding CR.
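The Chebyshev center of a polyhedron $\{x : Ax \le b\}$ can be computed with a small linear program; the sketch below uses `scipy.optimize.linprog` as the LP solver, which is an implementation choice of this sketch rather than of the paper:

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Chebyshev center of {x : A x <= b}: the center of the largest
    inscribed ball, found via the LP  max r  s.t.  A x + ||A_i|| r <= b."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0                                  # minimize -r, i.e., maximize r
    res = linprog(c, A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    return res.x[:-1], res.x[-1]                  # (center, radius)

# Unit square -1 <= x, y <= 1: center at the origin, radius 1.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0])
center, radius = chebyshev_center(A, b)
```

The returned radius is also the quantity checked against the splitting threshold in Remark 3.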
We now assume the current state $x$ is obtained. To compute the optimal input, we first construct a hyperrectangle with $x$ as its center, whose points $\theta$ satisfy, for $i = 1, \dots, n$,

$\theta_i \ge x_i - \rho$, (11a)
$\theta_i \le x_i + \rho$, (11b)

where $x_i$ is the $i$th element of $x$ and $\rho$ is a user-defined parameter that sets the size of the hyperrectangle. The "region query" on the kd tree is then performed with the hyperrectangle as its argument. After the query, we obtain a list of the nodes inside the region. We then check these nodes to find the CR that $x$ belongs to. The nodes are checked sequentially according to their distance from $x$. If the exact CR is found, the input can be calculated easily with (7a), and the invoke frequency of this CR is increased by one. The invoke frequency of each CR is checked when deleting a node from the kd tree. If the corresponding CR is not found, the optimization problem in system ((3a), (3b), (3c), (3d), and (3e)) is solved with a small horizon value. A suboptimal solution is thus obtained, and the computation time is less than that of the original problem. The above procedures are shown in Figure 6. The feasible region of the parameter is partitioned into CRs, and the dot in each CR is its Chebyshev center.
In Figure 6, the square point is the current parameter $x$, and the CR that it belongs to is a narrow quadrangle. The hyperrectangle with $x$ as its center is plotted as a rectangle with a solid line. Three points are found in the region. The three corresponding nodes in the kd tree are then checked; the exact CR is discovered; and the input is computed with the corresponding information. If the current state does not belong to any CR found in the region query, or no CR is found at all, as with the triangle point and its region drawn with the dashed line in Figure 6, the input is computed using system ((3a), (3b), (3c), (3d), and (3e)) with a short horizon, and the kd tree is updated. The update includes the following operations: the original optimization problem in system ((4a), (4b)) is solved; the related information of the newly found CR is calculated; and a node that represents this CR is inserted into the kd tree.
If the tree height exceeds the threshold, the deletion operation is invoked. The deletion operation proceeds as follows: a node set is built consisting of the chain of nodes from the newly inserted node's father all the way up toward the root; the chain terminates at the first node with two child nodes, and this node is excluded from the set. The node chain is illustrated in Figure 7: in a kd tree whose height threshold is exceeded by the newly inserted node I, the set defined above contains the single-child ancestors of I shown in the figure.
The node in this set with the minimal invoke frequency is then deleted. The following theorem concerning the deletion operation can be derived.
Theorem 1. The node set built following the above principles is nonempty, and the deletion of any node in it lowers the height of the kd tree, which limits the height within the threshold.
Proof. If the newly inserted node makes the tree exceed the height limit, its father node must have been an external node (leaf node) before the insertion. Thus, the father of the newly inserted node is in the set, which guarantees that the set has at least one element. The deletion operation affects only the subtree rooted at the node to be deleted, according to the properties of the deletion operation in a kd tree [18]. The nodes in the set, together with the newly inserted node, form exactly a subtree with only one node in each level, as in Figure 7. Deleting an internal node of this subtree, which is some node in the set, certainly lowers the subtree's height. Thus, the deletion operation lowers the height of the former kd tree and limits the height within the threshold.
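The construction of the deletion-candidate set can be sketched as follows, using a hypothetical node layout with parent pointers and per-CR invoke frequencies (the labels R, F, G, I are illustrative, not the paper's):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(eq=False)
class Node:
    freq: int                          # invoke frequency of the stored CR
    parent: Optional["Node"] = None
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def deletion_candidates(new_node):
    """Chain from the new node's father up toward the root, stopping
    at (and excluding) the first ancestor with two children."""
    chain, node = [], new_node.parent
    while node is not None and not (node.left and node.right):
        chain.append(node)
        node = node.parent
    return chain

# Hypothetical tree: root R has two children, so the chain collects
# only the single-child ancestors G and F of the new node I.
R = Node(freq=9)
F = Node(freq=3, parent=R); R.left = F
R.right = Node(freq=5, parent=R)
G = Node(freq=1, parent=F); F.left = G
I = Node(freq=0, parent=G); G.left = I

victims = deletion_candidates(I)
victim = min(victims, key=lambda n: n.freq)   # least-used node is deleted
```

By Theorem 1 the chain is nonempty whenever the insertion pushes the tree over the height limit, so `min` always has a candidate to remove.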
Remark 2. The updating method discussed above has the following advantages. The update operation is efficient: an average logarithmic complexity can be maintained for the insertion and deletion operations, and the reduction in the tree height is guaranteed by only one deletion operation. The deletion of an internal node with only one child moves the tree toward a complete binary tree, which benefits the searching operations. The deletion operation affects only a few local nodes and does not destroy the balance of the tree over the entire space. Thus, some old information is protected from deletion, and the information stored in the kd tree does not converge to a few local areas as illustrated in Figure 2. In addition, the deletion operation can be further customized to trade off between local and overall performance; that is, if we only consider local performance, as the PE method does, we can change the deletion operation to pick the node to be deleted from the whole tree rather than from the local area. The structure of the kd tree makes it easier to map the real geometric relationship into the abstract data structure. These advantages can be observed in a simulation analogous to Figure 3. In Figure 8, we solve the same problem as in Figure 3 with our method, with the height of the kd tree set to 6. The second "learning period" is significantly shorter than the first, which means that some of the information stored in the kd tree during the first "learning period" is preserved. A more detailed comparison on numerical experiments can be found in Section 4.
Note that the above update need not be completed within exactly one sample period; it continues into the next sample period until finished. In particular, the tree is updated online without increasing the real-time computation load. Moreover, the kd tree can be built and "trained" offline using historical data.
Remark 3. For a large CR, if the query parameter lies near the boundary, the corresponding node may not be found even though it is in the kd tree. This phenomenon is illustrated in Figure 9: the star point lies in a large CR, but the Chebyshev center of the CR is so far from it that no suitable CR is found by the region query. To deal with this defect, when we compute the Chebyshev center of a CR and insert the node into the kd tree, we check the radius that corresponds to the center. If the radius is larger than a predefined threshold $\Gamma$, the CR is fictitiously cut into two halves through the center along a certain dimension, and the Chebyshev centers of the two halves are both inserted into the kd tree to represent the same CR. These operations are executed recursively until the radius that corresponds to each half is less than $\Gamma$. With a suitable $\Gamma$ for the example in Figure 9, the searching on the adjusted kd tree is illustrated in Figure 10, where several large CRs obtain more than one node. Note that the choice of the value of $\Gamma$ is associated with the size parameter in system ((11a), (11b)).
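The recursive splitting in Remark 3 can be sketched as follows, assuming a hypothetical rule that cycles through the dimensions when choosing where to cut (the paper does not fix this choice); the Chebyshev-center LP uses `scipy.optimize.linprog` as a stand-in solver:

```python
import numpy as np
from scipy.optimize import linprog

def cheb(A, b):
    """Chebyshev center and radius of {x : A x <= b} via an LP."""
    n = A.shape[1]
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    c = np.r_[np.zeros(n), -1.0]
    res = linprog(c, A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[n]

def split_nodes(A, b, gamma, dim=0):
    """Recursively bisect an oversized CR at its Chebyshev center until
    every piece has radius below gamma; each piece contributes one key
    representing the same CR."""
    center, radius = cheb(A, b)
    if radius <= gamma:
        return [center]
    e = np.zeros(A.shape[1]); e[dim] = 1.0
    left  = (np.vstack([A,  e]), np.r_[b,  center[dim]])   # x_dim <= c_dim
    right = (np.vstack([A, -e]), np.r_[b, -center[dim]])   # x_dim >= c_dim
    nxt = (dim + 1) % A.shape[1]
    return split_nodes(*left, gamma, nxt) + split_nodes(*right, gamma, nxt)

# Unit square with gamma = 0.6: its radius 1 exceeds 0.6, so it is cut
# once along x0 into two 1-by-2 rectangles of radius 0.5 (two keys).
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0])
keys = split_nodes(A, b, gamma=0.6)
```

Each returned center becomes a separate kd tree key, all pointing to the same CR record.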
Remark 4. A constraint check step is added before solving the optimization problem in the practical implementation. In particular, we first compute the unconstrained solution of (4a), which can be obtained with an analytical expression. We then check whether the unconstrained solution satisfies the constraints in (4b). If it does, the optimal value is obtained immediately.
We summarize two formal algorithms on the basis of the principles introduced above.
Algorithm I (online searching). Consider the following:
(1) Compute the unconstrained solution of the original problem (4a).
(2) If this solution satisfies the constraints (4b), output it and terminate.
(3) Compute the query region, the hyperrectangle ((11a), (11b)) centered at the current state.
(4) Perform the region query on the kd tree.
(5) If a critical region containing the state is found, compute the input with the corresponding coefficients in ((8a), (8b), (8c), (8d)).
(6) Else, solve a small-scale version of the original problem with a short horizon to compute the input, and update the kd tree with the current state.
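The steps of Algorithm I can be sketched as follows. This is a minimal sketch with hypothetical helper interfaces, not the authors' implementation: `tree_query` and `solve_short_horizon` stand in for the kd tree region query and the short-horizon QP, and CR records are assumed to carry fields `center`, `A`, `b`, `Kz`, `cz`:

```python
import numpy as np

def online_step(x, m, Kunc, G, w, S, tree_query, solve_short_horizon, rho=0.1):
    """One control interval of Algorithm I.  m is the number of inputs;
    Kunc maps the state to the unconstrained optimizer z = -H^{-1} F x."""
    z = Kunc @ x                                   # step 1: unconstrained solution
    if np.all(G @ z <= w + S @ x):                 # step 2: constraint check
        return z[:m]                               # first control move
    lo, hi = x - rho, x + rho                      # step 3: box around x, cf. (11)
    for r in sorted(tree_query(lo, hi),            # steps 4-5: query, then check
                    key=lambda r: np.linalg.norm(r.center - x)):
        if np.all(r.A @ x <= r.b):                 # x lies in this CR
            return (r.Kz @ x + r.cz)[:m]           # affine law (7a)
    return solve_short_horizon(x)                  # step 6: fallback short-horizon QP

# Demo: for a trivially feasible problem the unconstrained law is returned.
u = online_step(np.array([1.0]), m=1, Kunc=np.zeros((1, 1)),
                G=np.array([[1.0]]), w=np.array([10.0]), S=np.array([[0.0]]),
                tree_query=lambda lo, hi: [], solve_short_horizon=lambda x: None)
```

The early return in step 2 is exactly the constraint check of Remark 4; the fallback in step 6 is the only path that triggers the kd tree update of Algorithm II.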
Algorithm II (update kd tree). Consider the following:
(1) Solve the original QP problem ((3a), (3b), (3c), (3d), and (3e)) and obtain the active set.
(2) If the active set is empty, terminate and return.
(3) Calculate the parameters $K_z$, $c_z$, $K_\lambda$, and $c_\lambda$.
(4) Calculate the Chebyshev center and radius of the polyhedron defined by the inequalities ((9a), (9b)).
(5) If the radius exceeds the threshold, divide the polyhedron into two smaller polyhedrons through the center, and go to step (4) with each half.
(6) Insert a node with the Chebyshev center as the key into the kd tree.
(7) If the tree height exceeds the threshold, delete the node in the candidate set that has been searched the minimal number of times.
The updated kd tree is finally outputted.
4. Numerical Experiments
Two examples are used to verify the efficiency of the proposed algorithm. The algorithm is compared with the PE algorithm presented in [12]. Four indices from [12] are used to illustrate the algorithm performance. We describe the meaning of the indices in our implementation in this section.
Optimality Rate. The ratio $N_o/N_d$, where $N_o$ is the number of decision time points at which the optimal solution is found in the kd tree (or PE table), and $N_d$ is the total number of decision time points minus the number of decision time points at which the optimal value is computed by the constraint check. This index reflects how frequently the kd tree (or PE table) is used in the simulation.
Suboptimal Index. The relative gap between the optimal closed-loop objective function value and the achieved one. This index reflects the influence of the suboptimal solutions computed from the short-horizon optimization problems.
Average Speedup Factor. The ratio of the average time required to compute the optimal solution by the MATLAB function quadprog to the average time required to compute the (sub)optimal solution by our algorithm (or the PE algorithm). This index reflects the average computation speedup achieved by our algorithm (or the PE algorithm).
Worst-Case Speedup Factor. The ratio of the maximum time used to compute the optimal solution with the MATLAB function quadprog to the maximum time used to compute the (sub)optimal solution by the combined method. This index reflects the computation speedup under the worst case.
Example 1. The first example is a randomly generated model from [17], a small-scale process with two inputs and three states. The state-space model is presented as follows. In order to test our method and compare its behavior with the PE method under setpoint changes, we consider a reference tracking problem and choose a state as output. Constraints are imposed on the input and the state, and the horizon and the control weighting matrices are fixed, with the terminal weight calculated by solving a Riccati equation as in [8]. We set the threshold of the height of the kd tree to 5, 6, 7, 8, 9, and 10 to verify the performance of our algorithm. If the right CR is not found in the kd tree, we instead solve the optimization problem with a shorter horizon. The search range in system ((11a), (11b)) is fixed accordingly. We change the setpoint 20 times randomly in the 2000-time-point simulation to test the efficiency of the algorithm. Additionally, white noise acts on the states. The performance of our algorithm is presented in Table 1.
Compared with the quadprog function in MATLAB, our algorithm is more than 9 times faster on average and more than 4 times faster even in the worst case, while the degradation of the controller's performance is relatively small. The choice of the height threshold is a trade-off between computation speed and controller performance: if we increase the threshold by one, the total tree capacity is theoretically doubled. A taller tree can store more nodes, so the optimality rate is higher, but it also requires more time to search for a certain node.
For comparison, the performance of the PE algorithm is presented in Table 2. The number XX in "PEXX" in the table is the length of the table used to store CR information in the PE algorithm. The optimality rate indices of the two algorithms are basically the same, whereas our algorithm outperforms the PE algorithm on both the average speedup factor and the worst-case speedup factor. Because of the setpoint changes in the simulation, the PE method behaves poorly when the table is short (PE1). A longer table improves the performance but consumes more time (PE25~PE200), a drawback that is overcome in our method.


Example 2. A large-scale process is tested in Example 2 to verify the performance of our algorithm on large-scale problems. It is a copolymerization reactor model for the copolymerization of methyl methacrylate (MMA) and vinyl acetate (VA) from [2]. The transfer function is as follows. The five normalized inputs of the process are the flows of the monomer MMA, the monomer VA, the initiator, and the transfer agent, and the temperature of the reactor jacket. The four normalized outputs of the system are the polymer production rate, the mole fraction of MMA in the polymer, the average molecular weight of the polymer, and the reactor temperature.
The continuous model is discretized at a sampling rate of 1 Hz, and there are 18 states in the discretized statespace model. The feasible regions for the inputs and states are and . The controller’s parameters are set as , , and , where is the output matrix in the statespace model, and is calculated by solving a Riccati equation as in [8].
We consider a regulation problem in Example 2. The simulation is implemented as follows. First, we initialize the state and input with two randomly generated values in their feasible regions. The task of the controller is to regulate the state to zero, and the state and input must not violate their feasible regions. The simulation length is chosen so that all states are driven to around zero. The above simulation is repeated 30 times as a single experiment. Similar to Example 1, we vary the threshold of the height of the kd tree over 5, 6, 7, 8, 9, and 10 to verify the performance of our algorithm. A short-horizon version of the optimization problem ((3a), (3b), (3c), (3d), and (3e)) is solved if the right CR is not found in the region query. The search range in system ((11a), (11b)) is fixed accordingly. White noise is also added to the state to mimic a real application. The performance is illustrated in Table 3.

The suboptimal index is relatively low for largescale processes. The computation speed is accelerated by more than 6 times on average and at least twice even under the worst case, compared with the quadprog function in MATLAB.
The performance of the PE algorithm is illustrated in Table 4. The optimality rate is small if the length of the table is set to one, which means that the table information is hardly used in the simulation; this is reasonable because the table is too short. Conversely, the worst-case speedup factor falls below one if we build a long table to store information, indicating that the PE algorithm solves the optimization problem more slowly than the quadprog function in MATLAB in the worst case. This is because, in the worst case, the PE algorithm must check every item in the table before concluding that the corresponding CR is not in the table, whereas our algorithm only ever needs to check a subset of the nodes in the kd tree.

Example 1 shows the performance of our algorithm on a randomly generated small-scale process, whereas Example 2 shows that our algorithm is also efficient on a large-scale, industry-oriented process. Both examples demonstrate the efficiency of our algorithm. The comparison with the quadprog function in MATLAB illustrates that our algorithm is remarkably faster than a general online optimization method. The comparison with the PE method shows that our algorithm is competitive as a method based on the combination of explicit MPC and online optimization. Note that, in [12], the average and worst-case speedup indices reported for the PE method are much greater than one, whereas the two indices are unattractive for the PE method in this study. This discrepancy can be attributed to several factors: different online optimization solvers are used for comparison, and details of the algorithm implementation can also cause differences. In particular, the performance of both our algorithm and the PE algorithm could be improved by further optimizing the algorithm design and implementation.
5. Conclusion
By combining explicit MPC and online optimization, this study uses a kd tree to design a fast MPC algorithm. Traditional explicit MPC has been shown to be unsuitable for large-scale processes, and online methods cannot keep up with fast dynamic processes. Other hybrid methods such as PE are efficient only in scenarios where setpoint signals change slowly and disturbances have small magnitudes. With the introduction of the kd tree, our algorithm is hardly influenced by setpoint changes or disturbances. Numerical experiments show that the algorithm is significantly faster than both general online optimization and the PE method, with only a small degradation in performance. On a copolymerization reactor model with 500 decision variables and 4600 constraints, the suboptimality is less than 0.05%, the average speedup factor is more than 6.3, and the worst-case speedup factor is at least 2.1.
Several extensions can be built on these results. In our algorithm, the node in the kd tree is found through a region query, whereas the nearest-neighbor query discussed in [19] could improve searching efficiency. Moreover, when a correct CR is not found in the kd tree, our method solves a small optimization problem with a short horizon, which can be time consuming; other techniques could accelerate the QP solution by utilizing the search results (i.e., CR information around the query point).
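The nearest-neighbor query of [19] can be sketched as follows. This is a hedged, self-contained illustration under our own assumptions (point set, naming, and cyclic splitting rule); it is not the authors' proposed extension, only the classical pruned search that the extension would build on: a subtree is visited only if its splitting plane is closer to the query than the best match found so far.

```python
# Sketch of a kd-tree nearest-neighbor query with plane-distance pruning,
# in the spirit of Friedman, Bentley, and Finkel [19]. Illustrative only.

import math
from typing import List, Optional, Tuple

Point = Tuple[float, ...]

class KDNode:
    def __init__(self, point: Point,
                 left: Optional["KDNode"] = None,
                 right: Optional["KDNode"] = None):
        self.point, self.left, self.right = point, left, right

def build_tree(points: List[Point], depth: int = 0) -> Optional[KDNode]:
    """Balanced construction: median split, cycling through coordinates."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid],
                  build_tree(points[:mid], depth + 1),
                  build_tree(points[mid + 1:], depth + 1))

def nearest(node: Optional[KDNode], query: Point, depth: int = 0,
            best: Optional[Point] = None) -> Optional[Point]:
    """Return the stored point closest to `query`, pruning any subtree
    whose splitting plane lies farther away than the best match so far."""
    if node is None:
        return best
    if best is None or math.dist(query, node.point) < math.dist(query, best):
        best = node.point
    axis = depth % len(query)
    near, far = ((node.left, node.right) if query[axis] < node.point[axis]
                 else (node.right, node.left))
    best = nearest(near, query, depth + 1, best)
    # Visit the far side only if a closer point could hide behind the plane.
    if abs(query[axis] - node.point[axis]) < math.dist(query, best):
        best = nearest(far, query, depth + 1, best)
    return best
```

In the fast-MPC setting, such a query would return the stored CR whose representative point is closest to the current state, giving a natural warm start even when the query point does not fall inside any stored region.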
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This study is financially supported by the 863 Program of China (no. 2014AA041802) and the National Science Foundation of China (nos. 61273145 and 61273146). The authors acknowledge their support.
References
[1] S. J. Wright, “Applying new optimization algorithms to model predictive control,” in Chemical Process Control–V, vol. 93 of CACHE, AIChE Symposium Series no. 316, pp. 147–155, American Institute of Chemical Engineers, New York, NY, USA, 1997.
[2] C. V. Rao, S. J. Wright, and J. B. Rawlings, “Application of interior-point methods to model predictive control,” Journal of Optimization Theory and Applications, vol. 99, no. 3, pp. 723–757, 1998.
[3] Y. Wang and S. Boyd, “Fast model predictive control using online optimization,” IEEE Transactions on Control Systems Technology, vol. 18, no. 2, pp. 267–278, 2010.
[4] R. A. Bartlett, L. T. Biegler, J. Backstrom, and V. Gopal, “Quadratic programming algorithms for large-scale model predictive control,” Journal of Process Control, vol. 12, no. 7, pp. 775–795, 2002.
[5] H. J. Ferreau, H. G. Bock, and M. Diehl, “An online active set strategy to overcome the limitations of explicit MPC,” International Journal of Robust and Nonlinear Control, vol. 18, no. 8, pp. 816–830, 2008.
[6] P. Patrinos, P. Sopasakis, and H. Sarimveis, “A global piecewise smooth Newton method for fast large-scale model predictive control,” Automatica, vol. 47, no. 9, pp. 2016–2022, 2011.
[7] P. Patrinos and A. Bemporad, “An accelerated dual gradient-projection algorithm for embedded linear model predictive control,” IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 18–33, 2014.
[8] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, “The explicit linear quadratic regulator for constrained systems,” Automatica, vol. 38, no. 1, pp. 3–20, 2002.
[9] A. Bemporad and C. Filippi, “Suboptimal explicit receding horizon control via approximate multiparametric quadratic programming,” Journal of Optimization Theory and Applications, vol. 117, no. 1, pp. 9–38, 2003.
[10] C. N. Jones and M. Morari, “Polytopic approximation of explicit model predictive controllers,” IEEE Transactions on Automatic Control, vol. 55, no. 11, pp. 2542–2553, 2010.
[11] F. J. Christophersen, M. N. Zeilinger, C. N. Jones, and M. Morari, “Controller complexity reduction for piecewise affine systems through safe region elimination,” in Proceedings of the 46th IEEE Conference on Decision and Control (CDC '07), pp. 4773–4778, December 2007.
[12] G. Pannocchia, J. B. Rawlings, and S. J. Wright, “Fast, large-scale model predictive control by partial enumeration,” Automatica, vol. 43, no. 5, pp. 852–860, 2007.
[13] P. Tøndel, T. A. Johansen, and A. Bemporad, “An algorithm for multi-parametric quadratic programming and explicit MPC solutions,” Automatica, vol. 39, no. 3, pp. 489–497, 2003.
[14] P. Tøndel, T. A. Johansen, and A. Bemporad, “Evaluation of piecewise affine control via binary search tree,” Automatica, vol. 39, no. 5, pp. 945–950, 2003.
[15] F. Bayat, T. A. Johansen, and A. A. Jalali, “Using hash tables to manage the time-storage complexity in a point location problem: application to explicit model predictive control,” Automatica, vol. 47, no. 3, pp. 571–577, 2011.
[16] T. A. Johansen, “Approximate explicit receding horizon control of constrained nonlinear systems,” Automatica, vol. 40, no. 2, pp. 293–300, 2004.
[17] M. N. Zeilinger, C. N. Jones, and M. Morari, “Real-time suboptimal model predictive control using a combination of explicit MPC and online optimization,” IEEE Transactions on Automatic Control, vol. 56, no. 7, pp. 1524–1534, 2011.
[18] J. L. Bentley, “Multidimensional binary search trees used for associative searching,” Communications of the ACM, vol. 18, no. 9, pp. 509–517, 1975.
[19] J. H. Friedman, J. L. Bentley, and R. A. Finkel, “An algorithm for finding best matches in logarithmic expected time,” ACM Transactions on Mathematical Software, vol. 3, no. 3, pp. 209–226, 1977.
[20] D. T. Lee and C. K. Wong, “Worst-case analysis for region and partial region searches in multidimensional binary search trees and balanced quad trees,” Acta Informatica, vol. 9, no. 1, pp. 23–29, 1977.
[21] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.
Copyright
Copyright © 2015 Yi Ding et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.