Abstract

We propose a receding horizon $H_\infty$ control (RHHC) for input-delayed systems. A new cost function for a finite horizon dynamic game problem is first introduced, which includes two terminal weighting terms parameterized by positive definite matrices, called terminal weighting matrices. Secondly, the RHHC is obtained from the solution to the finite horizon dynamic game problem. Thirdly, we propose an LMI condition under which the saddle point value satisfies the nonincreasing monotonicity. Finally, we show the asymptotic stability and $H_\infty$-norm boundedness of the closed-loop system controlled by the proposed RHHC. The proposed RHHC has a guaranteed $H_\infty$ performance bound for nonzero external disturbances, and the quadratic cost can be improved by adjusting the prediction horizon length for a nonzero initial condition and zero disturbance, which is not the case for existing memoryless state-feedback controllers. It is shown through a numerical example that the proposed RHHC is stabilizing and satisfies the infinite horizon $H_\infty$ performance bound. Furthermore, the performance in terms of the quadratic cost is shown to be improved by adjusting the prediction horizon length when there is no external disturbance and the initial condition is nonzero.

1. Introduction

In many industrial and natural dynamic processes, time delays on states and/or control inputs are often encountered in the transmission of information or material between different parts of a system. Chemical processing systems, transportation systems, communication systems, and power systems are typical examples of time-delay systems. As one class of time-delay systems, input-delayed systems are common and are often preferred because they allow easy modeling and tractable analysis. Much research on input-delayed systems has been carried out for decades in order to compensate for the deterioration of performance due to the presence of input delay [15].

For ordinary systems without time delay, the receding horizon control (RHC), or model predictive control (MPC), has attracted much attention from academia and industry because of its many advantages, including ease of computation, good tracking performance, and I/O constraint handling, compared with the popular steady-state infinite horizon linear quadratic (LQ) control [6–8]. The RHC for ordinary systems has been extended to the $H_\infty$ problem in order to combine the practical advantages of the RHC with the robustness of $H_\infty$ control [9–11]. These works investigated the nonincreasing monotonicity of the saddle point value, which corresponds to the optimal cost in LQ problems.

For time-delay systems, there are several results on the RHC [12–15]. A simple receding horizon control with a special cost function was proposed for state-delayed systems by using a reduction method [12]. However, it does not guarantee closed-loop stability by design, and therefore stability can be checked only after the controller has been designed. A general cost-based RHC for state-delayed systems was introduced in [13]. This method has both state and input weighting terms in the cost function and guarantees closed-loop stability by design. The RHC in [13] is more effective in terms of the cost function since it has a more general form than memoryless state-feedback controllers. This RHC was extended to a receding horizon $H_\infty$ control (RHHC) in [14]. Although stability and performance boundedness were shown in [14], the advantage of the RHHC over the memoryless state-feedback controller was not mentioned there. While the results mentioned above deal with state-delayed systems, the result in [15] deals with the RHC for input-delayed systems; it extends the idea in [13] to input-delayed systems. However, to the best of our knowledge, there exists no result on the receding horizon $H_\infty$ control for input-delayed systems. The purpose of this paper is to lay the cornerstone of the theory of RHHC for input-delayed systems. Issues such as the solution, stability, existence conditions, and performance boundedness will be addressed in the main results. Furthermore, the advantage of the RHHC for input-delayed systems over the memoryless state-feedback controller will be illustrated by adjusting the prediction horizon length.

The rest of this paper is structured as follows. In Section 2, we obtain a solution to the receding horizon $H_\infty$ control problem. In Section 3, we derive an LMI condition under which the nonincreasing monotonicity of the saddle point value holds. In Section 4, we show that the proposed RHHC guarantees asymptotic stability and satisfies the $H_\infty$-norm bound. In Section 5, a numerical example is given to illustrate that the proposed RHHC is stabilizing and guarantees the $H_\infty$ performance bound. Finally, conclusions are drawn in Section 6.

Throughout the paper, the notation $P > 0$ ($P \geq 0$) means that the matrix $P$ is symmetric and positive definite (positive semidefinite). Similarly, $P < 0$ ($P \leq 0$) means that the matrix $P$ is symmetric and negative definite (negative semidefinite). The symbol "$\ast$" is used to denote the elements below the main diagonal of a symmetric matrix. $L_2[0, \infty)$ and $L_2[t_0, t_f]$ denote the spaces of square integrable functions on the infinite horizon $[0, \infty)$ and on a finite interval $[t_0, t_f]$, respectively.

2. Receding Horizon $H_\infty$ Control for Input-Delayed Systems

Consider a linear time-invariant system with an input delay, described by (2.1), with initial conditions consisting of an initial state and an initial input trajectory on the delay interval $[-h, 0)$, where $x(t)$ is the state, $u(t)$ is the control input, $w(t)$ is the disturbance signal that belongs to $L_2[0, \infty)$, $z(t)$ is the controlled output, and $h$ is the constant input delay. The system matrices are constant matrices of appropriate dimensions, and the initial input trajectory is assumed to be a continuous function. In order to obtain the RHHC, we first consider the finite horizon cost function (2.2), which contains a state weighting matrix $Q \geq 0$, an input weighting matrix $R > 0$, a disturbance attenuation level $\gamma > 0$, and two terminal weighting terms. We can regard the cost as a function of either signals or feedback strategies. The control and disturbance are restricted to strategy spaces of continuous vector functions, and we write strategies as feedback maps of the current state and the stored input history to distinguish them from the signals $u$ and $w$; when a strategy pair is applied to the system, the cost evaluated at the resulting signals coincides with the cost evaluated at the strategies.
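Because the displayed system equation (2.1) is not reproduced above, the following minimal Python sketch assumes a representative input-delayed model of the form $\dot{x}(t) = A x(t) + B u(t-h) + D w(t)$, $z(t) = C x(t)$, and simulates it with a forward Euler step and a buffered input history; the matrices, delay, step size, and feedback gain are illustrative placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical input-delayed model  dx/dt = A x(t) + B u(t-h) + D w(t),  z = C x(t).
# All numerical values are illustrative placeholders, not the paper's data.
A = np.array([[0.0, 1.0], [1.0, -0.5]])   # an open-loop unstable example matrix
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
h, dt, t_end = 0.5, 0.001, 10.0

n_delay = int(round(h / dt))              # number of buffered input samples
x = np.array([1.0, 0.0])                  # nonzero initial state
u_hist = np.zeros(n_delay)                # u(t + s) = 0 for s in [-h, 0)

def controller(x_now, u_history):
    """Placeholder memoryless feedback; the RHHC of (2.21) would also use u_history."""
    K = np.array([-3.0, -2.0])
    return float(K @ x_now)

for k in range(int(t_end / dt)):
    w = 0.0                                # zero disturbance in this run
    u_delayed = u_hist[0]                  # oldest buffered sample, approximately u(t - h)
    dx = A @ x + B.flatten() * u_delayed + D.flatten() * w
    x = x + dt * dx                        # forward Euler step
    u_now = controller(x, u_hist)
    u_hist = np.roll(u_hist, -1)           # shift the input history buffer
    u_hist[-1] = u_now
```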

Let us formulate the dynamic game problem (2.3), which is a zero-sum game in which the control $u$ is the minimizing player and the disturbance $w$ is the maximizing player. If the extremizing operators in (2.3) are interchangeable, then the corresponding minimizing and maximizing strategies are called saddle point strategies. A saddle point solution satisfies (2.4), and the corresponding value of the cost is called the saddle point value. For simple notation, the saddle point value will be denoted by $J^*$ throughout this paper. The purpose of this paper is to develop a method to design a control law based on the receding horizon concept such that (a) in the case of zero disturbance, the closed-loop system is asymptotically stable and (b) with zero initial condition, the closed-loop transfer function from the disturbance $w$ to the controlled output $z$, denoted $T_{zw}$, satisfies the $H_\infty$-norm bound $\|T_{zw}\|_\infty \leq \gamma$ for a given $\gamma > 0$. Since the proposed control is based on the receding horizon strategy and the closed-loop system satisfies the $H_\infty$-norm bound, such a control will be called the receding horizon $H_\infty$ control (RHHC).
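In the notation adopted in this rewrite (the original display equations are not reproduced), the saddle point property referred to in (2.4) and the required $H_\infty$-norm bound can be written as follows, with $J$ the cost in (2.2) and $T_{zw}$ the closed-loop map from $w$ to $z$:

```latex
% Saddle point property (cf. (2.4)) and H-infinity norm bound, in assumed notation.
\[
  J(u^{*}, w) \;\le\; J(u^{*}, w^{*}) \;=\; J^{*} \;\le\; J(u, w^{*})
  \quad \text{for all admissible } u, w ,
\]
\[
  \|T_{zw}\|_{\infty} \le \gamma
  \quad\Longleftrightarrow\quad
  \int_{0}^{\infty} z^{T}(t) z(t)\, dt \;\le\; \gamma^{2} \int_{0}^{\infty} w^{T}(t) w(t)\, dt
  \quad \text{for all } w \in L_{2}[0, \infty), \ x_{0} = 0 .
\]
```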

Remark 2.1. It is noted that the terminal weighting function consists of two terms, parameterized by two matrices, which we call terminal weighting matrices in this paper. The purpose of adding the second terminal weighting term is to take the delay effect into account in designing a stabilizing RHHC. More specifically, if the terminal weighting matrices are chosen properly, the saddle point value satisfies the “nonincreasing monotonicity property,” which will be considered in Section 3.
Before moving on, we introduce a lemma which establishes a sufficient condition for a control and a disturbance to be saddle point strategies. In the lemma, $V$ denotes a continuous and differentiable functional of the current state and the stored input history. Furthermore, we use a shorthand notation in which $V$ is evaluated along the solution of the system (2.1) resulting from a given control and disturbance pair.

Lemma 2.2. Assume that there exist a continuous functional $V$ and feedback strategies $u^*$ and $w^*$ satisfying the conditions in (2.9) for all admissible controls, disturbances, and initial data. Then $u^*$ and $w^*$ are saddle point strategies, and the value of $V$ at the initial data is the saddle point value $J^*$.
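The displayed conditions of Lemma 2.2 are not reproduced above; the block below records a common form of such verification-type conditions, assuming the quadratic integrand of (2.2) with weights $Q$, $R$, and level $\gamma$. It is a sketch of the typical structure, not the lemma's exact statement.

```latex
% Typical saddle-point verification inequalities (assumed form): along trajectories
% of (2.1), for all admissible u and w,
\[
  \frac{d}{dt} V + x^{T} Q x + u^{*T} R\, u^{*} - \gamma^{2} w^{T} w \;\le\; 0
  \;\le\;
  \frac{d}{dt} V + x^{T} Q x + u^{T} R\, u - \gamma^{2} w^{*T} w^{*} ,
\]
% together with the requirement that V at the terminal time equals the terminal
% weighting terms of the cost (2.2).
```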

Proof. Similar lemmas are found in [13–16]. Although Lemma 2.2 differs from those lemmas, the idea of the proof can be obtained without difficulty from the mentioned references. Thus, we omit the proof of the lemma.

From the above lemma, we see that the value of $V$ at the initial data is indeed the saddle point value, that is, it equals $J^*$. Furthermore, it is noted that $J^* \geq 0$ for all initial data. This can be verified as follows.

From (2.9), it follows that the saddle point value can be bounded from below by the cost incurred when the saddle point control is applied together with the zero disturbance. Since the terminal weighting terms are nonnegative and the remaining integrand is nonnegative when the disturbance is zero, this lower bound is nonnegative. Consequently, $J^* \geq 0$ for all initial data.

Before deriving the RHHC, we first provide the solution to the finite horizon dynamic game problem in (2.3). The derivation is based on Lemma 2.2. The procedure for deriving the solution is lengthy and tedious but similar to that used in [15]; therefore, we do not provide the detailed derivation here. In order to apply the result of Lemma 2.2, we assume that the saddle point value has a quadratic form in the current state and the stored input history, parameterized by matrix functions that are determined later on. Using this form of the saddle point value, the saddle point strategies for the dynamic game problem in (2.3) are obtained as feedback laws whose gains are defined through matrix functions satisfying the Riccati-type coupled partial differential equations in (2.15) and (2.17), together with the associated boundary conditions. These equations are solved backward in time from the terminal time to the initial time. Because the system is time-invariant, the shape of the solutions is characterized only by the difference between the initial time and the final time, that is, by the horizon length. The values of the solutions at the initial time vary with the horizon length, but for a fixed horizon length they are the same regardless of when the horizon starts; for example, the solution on a horizon of a given length starting at one time instant is equal to the solution on a horizon of the same length starting at any other time instant. If we take the receding horizon strategy, the initial and terminal times correspond to the current time $t$ and $t + T$, respectively, where $T$ denotes the prediction horizon length. This means that the difference between the initial time and the terminal time is always $T$, so the feedback gains reduce to constant matrices regardless of the current time $t$. Introducing new notation for these constant gains, the RHHC is finally represented as the distributed feedback strategy in (2.21). It is noted that the feedback strategy is invariant with time. In order to solve the Riccati-type coupled partial differential equations (PDEs) given in (2.15) and (2.17), we can utilize the numerical algorithm in [16]. The time required to solve the PDEs is proportional to the prediction horizon length $T$. However, the real-time computational load of the RHHC remains the same for any prediction horizon length larger than the delay length $h$.
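The distributed feedback strategy in (2.21) uses the current state and an integral over the stored input history on $[t-h, t]$. The Python sketch below shows one way such a law can be evaluated numerically from precomputed constant gains; the gain names, their values, and the rectangle-rule quadrature are assumptions for illustration, not the gains obtained from (2.15) and (2.17).

```python
import numpy as np

def rhhc_control(x_now, u_hist, K0, K1_grid, h):
    """Evaluate a distributed feedback law of the assumed form
        u(t) = K0 x(t) + integral_{-h}^{0} K1(s) u(t+s) ds,
    where u_hist stores samples of u(t+s) on a uniform grid over s in [-h, 0).
    K0 and K1_grid are placeholders standing in for the constant RHHC gains of (2.21)."""
    ds = h / len(u_hist)
    # Rectangle-rule quadrature of the distributed (memory) term.
    memory_term = float(np.sum(K1_grid * u_hist) * ds)
    return float(K0 @ x_now) + memory_term

# Illustrative call with placeholder gains (not the paper's values).
x_now = np.array([1.0, -0.2])
u_hist = np.zeros(500)                  # stored input history over [t-h, t)
K0 = np.array([-3.0, -2.0])
K1_grid = np.linspace(-0.5, 0.0, 500)   # samples of a hypothetical kernel K1(s)
u_now = rhhc_control(x_now, u_hist, K0, K1_grid, h=0.5)
```

Only the stored input history over the last $h$ time units is needed online, which is consistent with the observation that the real-time computational load does not grow with the prediction horizon length $T$.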

We have constructed the RHHC from the solution to a finite horizon dynamic game problem. However, the only thing we can say about the control at present is that it is obtained based on the receding horizon strategy; nothing can be said yet about asymptotic stability and $H_\infty$-norm boundedness. We will therefore investigate these issues in the next two sections.

3. Nonincreasing Monotonicity of a Saddle Point Value

Nonincreasing monotonicity of the saddle point value plays an important role in proving closed-loop stability and guaranteeing the $H_\infty$-norm bound for delay-free systems and state-delayed systems. As will be shown later, this is also the case for input-delayed systems. In what follows, we show how to choose terminal weighting matrices such that the saddle point value satisfies the nonincreasing monotonicity.

Theorem 3.1. Given $\gamma > 0$, assume that there exist matrices satisfying the LMI (3.1). If one chooses the terminal weighting matrices from a feasible solution of (3.1), the saddle point value satisfies the nonincreasing monotonicity property stated in (3.2).
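The displayed inequality (3.2) is not reproduced above; in the notation assumed here, with $J^{*}(\,\cdot\,; t_f)$ the saddle point value of the game on a horizon ending at $t_f$ and $(x_t, u_t)$ the current state and stored input history, the nonincreasing monotonicity property it expresses can be stated as follows.

```latex
% Nonincreasing monotonicity of the saddle point value (assumed statement of (3.2)).
\[
  \frac{\partial J^{*}(x_{t}, u_{t}, t;\, t_{f})}{\partial t_{f}} \;\le\; 0 ,
  \qquad \text{equivalently} \qquad
  J^{*}(x_{t}, u_{t}, t;\, \sigma_{2}) \;\le\; J^{*}(x_{t}, u_{t}, t;\, \sigma_{1})
  \quad \text{for } t \le \sigma_{1} \le \sigma_{2} .
\]
```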

Proof. The derivative of the saddle point value with respect to the terminal time can be written as the difference between the costs attained by two saddle point solutions, one associated with the original terminal time and the other with the perturbed terminal time, together with the corresponding state trajectories. Let us replace the feedback strategies of one problem by those of the other up to the original terminal time and keep appropriately chosen strategies on the remaining interval. It is noted that, since we have changed strategies, the resulting state trajectory is neither of the two saddle point trajectories; let us denote the resulting trajectory by $\hat{x}$. Then we obtain an upper bound on the derivative of the saddle point value. After substituting the chosen strategies into this bound, we obtain an expression whose sign is determined by a matrix, which we denote by $M$. It is apparent that, if $M \leq 0$, the nonincreasing monotonicity in (3.2) holds. The condition $M \leq 0$ can be rewritten as a matrix inequality. Pre- and postmultiplying this matrix inequality by a suitable congruence transformation and introducing the corresponding changes of variables, the Schur complement shows that $M \leq 0$ is equivalent to (3.1). This completes the proof.

The nonincreasing monotonicity of the saddle point value implies that the saddle point value does not increase even though we increase the horizon length. As will be shown in the next section, this property plays an important role in establishing the closed-loop stability and $H_\infty$-norm boundedness achieved by the RHHC.

Remark 3.2. It is mentioned that, once we obtain feasible matrices satisfying the LMI (3.1), the memoryless state-feedback controller constructed from them is also a stabilizing controller with a guaranteed $H_\infty$ performance bound, even though we do not provide the proof here due to space limitations. The features of the proposed RHHC compared with this controller will be illustrated through a numerical example.
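Feasibility problems such as the LMI (3.1) are typically checked with a semidefinite programming tool. Since (3.1) itself is not reproduced above, the Python/CVXPY sketch below only illustrates the mechanics on a simple stand-in LMI (a delay-free bounded-real-lemma-type condition with a single variable $X$); the block structure, variable names, and numerical data are assumptions, not the actual condition of Theorem 3.1.

```python
import cvxpy as cp
import numpy as np

# Illustrative data (not the paper's model).
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
D = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
gamma = 2.0

n = A.shape[0]
X = cp.Variable((n, n), symmetric=True)

# Stand-in bounded-real-lemma-type LMI; the real condition (3.1) would involve
# additional variables and blocks tied to the two terminal weighting matrices.
M = cp.bmat([
    [A.T @ X + X @ A + C.T @ C, X @ D],
    [D.T @ X,                   -gamma**2 * np.eye(1)],
])
prob = cp.Problem(cp.Minimize(0),
                  [X >> 1e-6 * np.eye(n), M << -1e-9 * np.eye(n + 1)])
prob.solve(solver=cp.SCS)
print(prob.status)
print(X.value)   # a feasible X plays the role of a terminal-weighting-matrix candidate here
```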

4. Asymptotic Stability and $H_\infty$-Norm Boundedness

In this section, we show that the proposed receding horizon $H_\infty$ control achieves closed-loop asymptotic stability for zero disturbance and achieves $H_\infty$-norm boundedness for zero initial condition.

Theorem 4.1. Given $\gamma > 0$ and the terminal weighting matrices, if the saddle point value satisfies the nonincreasing monotonicity property, the system (2.1) controlled by the RHHC in (2.21) is asymptotically stable for zero disturbance and satisfies the infinite horizon $H_\infty$-norm bound for zero initial condition.

Proof. Nonincreasing monotonicity of the saddle point value is a sufficient condition for asymptotic stability and $H_\infty$-norm boundedness of the RHHC for state-delayed systems. This theorem states that this is also the case for the RHHC for input-delayed systems. The complete proof of the theorem is lengthy, but the idea used in [14] can be applied to this theorem without much difficulty. Thus we omit the proof.

An LMI condition on the terminal weighting matrices under which the saddle point value satisfies the nonincreasing monotonicity is given in Theorem 3.1. Therefore, we arrive at the following corollary.

Corollary 4.2. Given $\gamma > 0$ and the weighting matrices in (2.2), if the LMI (3.1) is feasible and the two terminal weighting matrices are obtained from a feasible solution, the system (2.1) controlled by the proposed RHHC in (2.21) is asymptotically stable for zero disturbance and satisfies the infinite horizon $H_\infty$ performance bound for zero initial condition.

Remark 4.3. Memoryless state-feedback $H_\infty$ controllers also guarantee closed-loop stability and satisfy an $H_\infty$ performance bound. In fact, the proposed RHHC does not have an advantage over the existing state-feedback controllers in terms of the $H_\infty$ performance bound, as will be shown in the numerical example. However, the proposed RHHC has the advantage that it improves the performance represented in terms of the quadratic cost $\int_0^\infty [x^T(s) Q x(s) + u^T(s) R u(s)]\, ds$ by adjusting the prediction horizon length $T$ in the case of a nonzero initial condition with zero disturbance. Control systems are not always subject to disturbances, so it is meaningful to consider situations in which the disturbance has vanished. In such situations the proposed RHHC is suitable because it has a guaranteed $H_\infty$ performance bound and an improved quadratic cost. This feature will be illustrated later through a numerical example.

5. Numerical Example

In this section, a numerical example is presented in order to illustrate the features of the proposed RHHC. Consider an open-loop unstable input-delayed system of the form (2.1); the system matrix has an eigenvalue in the open right half plane. The state and input weighting matrices $Q$ and $R$ in (2.2) and the disturbance attenuation level $\gamma$ are fixed, and the terminal weighting matrices are then obtained from Theorem 3.1. We chose the prediction horizon length to be 1, that is, $T = 1$, and computed the RHHC in (2.21) after solving the partial differential equations given in this paper. The obtained RHHC has the form (5.3), where the shape of the distributed gain is shown in Figure 1. As mentioned in Remark 3.2, we can also obtain a stabilizing memoryless state-feedback controller from Theorem 3.1, given in (5.4).

In order to illustrate the system response to a disturbance input, we applied a disturbance whose shape is given in Figure 2. The state trajectory of the system under the proposed RHHC in (5.3) is compared in Figure 3 with that under the controller in (5.4). It is seen that both controllers stabilize the input-delayed system affected by the external disturbance. At first glance, the controller in (5.4) appears to outperform the proposed RHHC. For a quantitative comparison, we computed the $H_\infty$ performance. Firstly, the $H_\infty$ performance obtained for the proposed RHHC supports the fact that the controlled system satisfies the $H_\infty$ performance bound. For the controller given in (5.4), the obtained $H_\infty$ performance was 0.1647, which is even better than that of the proposed RHHC. This shows that the proposed RHHC does not have an advantage in terms of $H_\infty$ performance over existing methods. One may wonder, then, what the feature of the proposed RHHC is or when it is useful. As already mentioned, one prominent advantage of the proposed RHHC is that we can improve the control performance of the system, represented in terms of the quadratic cost, by adjusting the prediction horizon length for the stabilization problem with no external disturbance. For this illustration, we assumed a nonzero initial state. In the case of zero disturbance, the quadratic cost is defined as $\int_0^\infty [x^T(s) Q x(s) + u^T(s) R u(s)]\, ds$. Figure 4 shows the state trajectories obtained by applying the proposed RHHC with different prediction horizon lengths, together with the resulting quadratic costs. It is noted that the shortest prediction horizon length considered leads to the controller (5.4). The figure clearly shows that the RHHC with a longer prediction horizon $T$ achieves a smaller quadratic cost. This example illustrates that the proposed RHHC has a guaranteed $H_\infty$ performance bound for nonzero external disturbances and that the quadratic performance can be improved by adjusting the prediction horizon length in the case of a nonzero initial condition and zero disturbance. This feature is not achievable with the conventional memoryless state-feedback controller.
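The two quantities compared in this example, the observed disturbance attenuation level and the quadratic cost, can be computed from simulated trajectories as sketched below in Python; the trajectory arrays, weights, and time step are placeholders and are not tied to the data of Figures 2–4.

```python
import numpy as np

def hinf_performance_ratio(z_traj, w_traj, dt):
    """Observed attenuation sqrt( int z^T z dt / int w^T w dt ); it should not
    exceed gamma when the H-infinity performance bound holds (zero initial state)."""
    num = np.sum(z_traj**2) * dt
    den = np.sum(w_traj**2) * dt
    return np.sqrt(num / den)

def quadratic_cost(x_traj, u_traj, Q, R, dt):
    """Finite-horizon approximation of int ( x^T Q x + u^T R u ) dt."""
    cost = 0.0
    for x_k, u_k in zip(x_traj, u_traj):
        cost += (x_k @ Q @ x_k + u_k @ R @ u_k) * dt
    return cost

# Placeholder trajectories sampled with step dt (e.g., from the earlier simulation sketch).
dt = 0.001
x_traj = np.zeros((10000, 2)); u_traj = np.zeros((10000, 1))
z_traj = np.zeros((10000, 1)); w_traj = np.ones((10000, 1))
Q, R = np.eye(2), np.eye(1)
print(hinf_performance_ratio(z_traj, w_traj, dt))
print(quadratic_cost(x_traj, u_traj, Q, R, dt))
```

Evaluating the quadratic cost for several prediction horizon lengths gives the kind of comparison reported in Figure 4.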

6. Conclusions

In this paper, we proposed a receding horizon $H_\infty$ control (RHHC) for input-delayed systems. Firstly, we proposed a new cost function for a dynamic game problem. The cost function has two terminal weighting terms that are parameterized by two terminal weighting matrices. Secondly, we derived a saddle point solution to the finite horizon dynamic game problem. Thirdly, the receding horizon $H_\infty$ control was constructed from the obtained saddle point solution. We showed that, under the nonincreasing monotonicity condition on the saddle point value, the proposed receding horizon $H_\infty$ control is stabilizing and satisfies the $H_\infty$ performance bound. We proposed an LMI condition on the terminal weighting matrices under which the saddle point value satisfies the nonincreasing monotonicity. Unlike the conventional memoryless state-feedback controller, the proposed RHHC has the feature that the quadratic performance of the controlled system for a nonzero initial condition can be improved by adjusting the prediction horizon length.

Acknowledgments

This research was supported by an INHA Research Grant and was also supported by the MKE (The Ministry of Knowledge Economy), Korea, under the CITRC (Convergence Information Technology Research Center) support program (NIPA-2012-H0401-12-1007) supervised by the NIPA (National IT Industry Promotion Agency).