Abstract

Self-triggered control is a control method in which the control input and the sampling period are computed simultaneously in sampled-data control systems, and it has been extensively studied in the control theory of networked systems and cyber-physical systems. In this paper, a new approach to self-triggered control is proposed from the viewpoint of model predictive control (MPC). First, the difficulty of self-triggered MPC is explained. To overcome this difficulty, two problems, that is, (i) the one-step input-constrained problem and (ii) the N-step input-constrained problem, are newly formulated. By repeatedly solving either problem in each sampling period, the control input and the sampling period can be obtained, that is, self-triggered MPC can be realized. Next, an iterative solution method for the latter problem and an approximate solution method for the former problem are proposed. Finally, the effectiveness of the proposed approach is shown by numerical examples.

1. Introduction

In recent years, analysis and synthesis of networked control systems (NCSs) have been extensively studied [1, 2]. An NCS is a control system in which plants, sensors, controllers, and actuators are connected through communication networks. In distributed control systems, subsystems are frequently connected via communication networks, and it is important to analyze and synthesize such systems from the viewpoint of NCSs. The design of NCSs involves several technical issues such as packet losses, transmission delays, and communication constraints. However, it is difficult to consider these issues in a unified way, and it is more suitable to discuss each problem individually. From this viewpoint, several results have been obtained so far (see, e.g., [3–6]).

In this paper, we focus on the periodic paradigm as one of the technical issues in NCSs. Under the periodic paradigm, the controller is executed periodically with a given period. The period is chosen based on CPU processing time, communication bandwidth, and so on. However, in NCSs, communication should occur only when there is important information that must be transmitted from the controller to the actuator and/or from the sensor to the controller. In this sense, the periodic paradigm is not necessarily suitable, and it is important to consider a new approach for the design of NCSs. As one of the methods to overcome this drawback of the periodic paradigm, self-triggered control has been proposed (see, e.g., [7–12]). This control method has also attracted attention in the field of cyber-physical systems. In self-triggered control, the next sampling time at which the control input is recomputed is determined by the controller itself; that is, both the sampling period and the control input are computed simultaneously. In many existing works, a continuous-time controller is obtained first, and after that, the sampling period such that stability is preserved is computed. However, few results on optimal control have been obtained so far. From the viewpoint of optimal control, for example, a design method based on a one-step finite-horizon boundary has recently been proposed in [13, 14]. In this method, the first sampling period such that the optimal value of the cost function is improved is computed under the constraint that the other sampling periods are fixed to a given constant. However, a nonlinear equation must be solved. Furthermore, input constraints cannot be considered in this method. In [10], the authors have proposed self-triggered model predictive control using Taylor series expansions. In this method, the control input and the sampling period are computed by solving a quadratic programming (QP) problem, but the convexity of the obtained QP problem is not guaranteed.

In this paper, we propose two methods for self-triggered model predictive control (MPC) using optimization with prediction horizon one. First, the optimal control problem with horizon one is formulated. However, in constrained systems, one-step prediction may be insufficient, and a longer time interval over which the input constraint is imposed is required. Focusing on this fact, another problem in which the time interval with input constraints is enlarged is also formulated. In the former problem, the first sampling period and the first control input are optimized. In the latter problem, the first sampling period and a control input sequence are optimized. Next, an iterative solution method for the latter problem and an approximate solution method for the former problem are proposed. In the iterative solution method, a QP problem is solved repeatedly. In the approximate solution method, the problem is approximated by a single QP problem. The obtained QP problem is in general not convex, and we discuss its convexity. By solving either problem according to the receding horizon policy, self-triggered MPC can be realized. Finally, the effectiveness of the proposed approach is shown by numerical examples. The proposed approach provides a basic result for self-triggered optimal control.

Notation. Let R denote the set of real numbers. Let I and 0 denote the identity matrix and the zero matrix, respectively. For simplicity, we sometimes use the symbols I and 0 without explicitly indicating their dimensions when they are clear from the context.

2. Self-Triggered Model Predictive Control

Consider the following continuous-time linear system:

dx(t)/dt = A x(t) + B u(t),   (1)

where x(t) is the state and u(t) is the control input with the input constraint u_min ≤ u(t) ≤ u_max. The vectors u_min and u_max are given constant vectors. Let t_k, k = 0, 1, 2, ..., denote the sampling times. The sampling period is defined by τ_k := t_{k+1} - t_k, which is a nonnegative scalar. Assume that the control input is piecewise constant, that is, the control input is given by u(t) = u(t_k), t ∈ [t_k, t_{k+1}). Hereafter, we denote u(t_k) as u_k. In addition, assume that the pair (A, B) is controllable.
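Because the control input is held constant over each sampling interval, the state at the next sampling time can be obtained from the exact zero-order-hold discretization of the system (1). The following sketch illustrates this computation with scipy; the numerical matrices A and B are placeholders chosen only for illustration, and the paper itself does not provide code.

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, tau):
    """Exact zero-order-hold discretization of dx/dt = Ax + Bu over a period tau.

    Uses the standard augmented-matrix identity
        expm([[A, B], [0, 0]] * tau) = [[A_d, B_d], [0, I]].
    """
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm(M * tau)
    return Md[:n, :n], Md[:n, n:]   # A_d = e^{A tau}, B_d = int_0^tau e^{A s} ds B

# Example with an arbitrary second-order system (illustrative values only).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
A_d, B_d = discretize_zoh(A, B, tau=0.2)
```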

First, for the system (1), the self-triggered optimal control problem is formulated as follows.

Problem 1 (self-triggered optimal control problem). Suppose that for the system (1), the initial time t_0, the initial state x(t_0) = x_0, the final time t_f, and the final step k_f are given. Then, find both a control input sequence u_0, u_1, ..., u_{k_f-1} and a sampling period sequence τ_0, τ_1, ..., τ_{k_f-1} minimizing the following cost function:

J = ∫_{t_0}^{t_f} ( x^T(t) Q x(t) + u^T(t) R u(t) ) dt

under the following two constraints:

u_min ≤ u_k ≤ u_max, k = 0, 1, ..., k_f - 1,
τ_0 + τ_1 + ... + τ_{k_f-1} = t_f - t_0,

where Q is positive semidefinite, R is positive definite, and u_min and u_max are given constant vectors.

Next, we present a procedure of MPC based on the self-triggered strategy.

Procedure of Self-Triggered MPC

Step  1. Set k = 0, and give the initial state x(t_0) = x_0.

Step  2. Solve Problem 1.

Step  3. Apply only u_k, t ∈ [t_k, t_{k+1}), to the plant.

Step  4. Compute the predicted state x(t_{k+1}) by using x(t_k), u_k, and τ_k.

Step  5. Solve Problem 1 by using the predicted state x(t_{k+1}) as the initial state.

Step  6. Wait until time t_{k+1}.

Step  7. Update k := k + 1, measure x(t_k), and return to Step 3.

Note here that in this procedure, the timing (i.e., the sampling time) at which the state is measured and the control input is recomputed is determined by the controller itself. In this sense, self-triggered control is realized.

In the above procedure, Problem 1 must be solved repeatedly. However, Problem 1 is in general reduced to a nonlinear programming problem, and it is difficult to solve. Hence, it is important to compute a suboptimal or approximate solution of Problem 1. To this end, two problems, which are solved in Steps 2 and 5 instead of Problem 1, will be formulated in the next section.
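The procedure above can be summarized as a receding-horizon loop in which every iteration returns both the next control input and the next sampling period. The sketch below is only an illustration of that loop: solve_problem stands in for whichever surrogate problem (Problem 2 or Problem 3, introduced in the next section) is solved in Steps 2 and 5, and simulate_plant stands in for the plant (or for the prediction in Step 4).

```python
def self_triggered_mpc(x0, t0, t_end, solve_problem, simulate_plant):
    """Skeleton of the self-triggered MPC procedure of Section 2 (illustrative).

    solve_problem(t, x)       -> (u0, tau0): first input and first sampling period,
                                 obtained from a surrogate of Problem 1
    simulate_plant(x, u, tau) -> state after holding u for tau time units
    """
    t, x = t0, x0
    trajectory = [(t, x.copy())]
    while t < t_end:
        u0, tau0 = solve_problem(t, x)      # Steps 2/5: compute input and sampling period
        x = simulate_plant(x, u0, tau0)     # Steps 3/6: hold u0 until the next sampling time
        t = t + tau0                        # Step 7: update the sampling time
        trajectory.append((t, x.copy()))    # in practice, x would be re-measured here
    return trajectory
```

In an actual implementation, the optimization would be carried out during the waiting interval of Step 6, which is why the sampling period must be longer than the computation time (see Remark 4 below).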

3. Problem Formulation

In self-triggered MPC, only u_0 over the first sampling period τ_0 is applied to the plant. Hence, it is important to consider the problem of finding suitable u_0 and τ_0. By solving this problem repeatedly, we can continue to obtain the control input. Thus, the sampling period sequence is approximated as follows: (i) only τ_0 is a decision variable (i.e., the other sampling periods are given in advance), and (ii) τ_k = T, k = 1, 2, ..., holds for a given constant T. In addition, the time interval [t_0, t_f] in Problem 1 is enlarged to [t_0, ∞). Here, we consider the following two optimal control problems with prediction horizon one: (i) the one-step input-constrained problem and (ii) the N-step input-constrained problem.

In the former problem, the input constraint is imposed only on u_0. In the latter problem, the input constraint is imposed on u_0, u_1, ..., u_{N-1}, where N is given in advance.

Before these problems are formally given, some preparations are presented. In the time interval [t_1, ∞), suppose that τ_k = T is satisfied, where T is a given constant (see also Remark 4). In addition, the input constraint on u_k, k = 1, 2, ..., is ignored. Then, the optimal value of the cost function over this interval can be derived as x^T(t_1) P x(t_1), where P is a symmetric positive definite matrix, which is a solution of the discrete-time algebraic Riccati equation associated with the zero-order-hold discretization of the system (1) and of the weights Q and R under the constant sampling period T. Under the above preparation, we formulate the one-step input-constrained problem as follows.

Problem 2 (one-step input-constrained problem). Suppose that for the system (1), the initial time t_0, the initial state x(t_0) = x_0, and T are given. Then, find both a control input u_0 and a sampling period τ_0 minimizing the following cost function:

J(u_0, τ_0) = ∫_{t_0}^{t_0+τ_0} ( x^T(t) Q x(t) + u_0^T R u_0 ) dt + x^T(t_0 + τ_0) P x(t_0 + τ_0)

under the following two constraints:

u_min ≤ u_0 ≤ u_max,
T ≤ τ_0 ≤ τ_max,

where τ_max is a given constant.

In [13, 14], a related problem has been discussed, but in these existing results, the above two constraints cannot be imposed. In [10], the authors have considered a more complicated problem with delay compensation. In this paper, the above simplified problem is considered for analyzing the convexity.
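As a concrete illustration of the preparation used in Problem 2, the matrix P and the discretized weight matrices can be computed numerically as sketched below. The quadrature-based discretization, the handling of the cross term, and the numerical values are assumptions made for illustration; they follow the standard sampled-data LQ construction rather than reproducing the paper's own formulas.

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_are

def discretized_lq_weights(A, B, Q, R, tau, steps=200):
    """Discretize int_0^tau (x'Qx + u'Ru) dt under a zero-order hold of length tau.

    Returns (Qd, Rd, Sd) such that the cost over one interval equals
    x0' Qd x0 + 2 x0' Sd u0 + u0' Rd u0.  A plain trapezoidal rule is used
    here for readability (Van Loan's method is the standard alternative).
    """
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Qd, Rd, Sd = np.zeros((n, n)), np.zeros((m, m)), np.zeros((n, m))
    s_grid = np.linspace(0.0, tau, steps + 1)
    w = np.full(steps + 1, tau / steps)
    w[0] *= 0.5
    w[-1] *= 0.5
    for s, ws in zip(s_grid, w):
        E = expm(M * s)
        Phi, Gam = E[:n, :n], E[:n, n:]   # e^{As} and int_0^s e^{A r} dr B
        Qd += ws * (Phi.T @ Q @ Phi)
        Sd += ws * (Phi.T @ Q @ Gam)
        Rd += ws * (Gam.T @ Q @ Gam + R)
    return Qd, Rd, Sd

# Tail cost x'Px under the constant sampling period T (illustrative values only).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R, T = np.eye(2), np.eye(1), 0.5
E = expm(np.block([[A, B], [np.zeros((1, 3))]]) * T)
A_d, B_d = E[:2, :2], E[:2, 2:]
Qd, Rd, Sd = discretized_lq_weights(A, B, Q, R, T)
P = solve_discrete_are(A_d, B_d, Qd, Rd, s=Sd)
```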

In constrained systems, when control is started, the control input is frequently saturated, and it is important to determine the time interval of input saturation. Hence, one-step prediction may be insufficient, and a longer time interval over which the input constraint is imposed is required. From this viewpoint, another problem, that is, the N-step input-constrained problem, is formulated.

First, suppose that the input constraint is imposed in the time interval [t_0, t_N], where N is a given integer. In addition, suppose that no input constraint is imposed in the time interval [t_N, ∞). Then, consider the following cost function:

J = ∫_{t_0}^{t_N} ( x^T(t) Q x(t) + u^T(t) R u(t) ) dt + x^T(t_N) P x(t_N).   (10)

The optimal value of the cost over the time interval [t_N, ∞) can be characterized by the terminal term x^T(t_N) P x(t_N), because no input constraint is imposed in the time interval [t_N, ∞).

Under the above preparation, consider the following N-step input-constrained problem.

Problem 3 (N-step input-constrained problem). Suppose that for the system (1), the initial time t_0, the initial state x(t_0) = x_0, T, N, and ε are given. Then, find a control input sequence u_0, u_1, ..., u_{N-1} maximizing the sampling period τ_0 under the following constraints:

u_min ≤ u_k ≤ u_max, k = 0, 1, ..., N - 1,   (11)
J*(τ_0) ≤ (1 + ε) J*(T),   (12)

where ε ≥ 0 is a given constant, J*(τ_0) is the optimal value of the cost function (10) for the first sampling period τ_0, and J*(T) is the optimal value of the cost function (10) under τ_0 = T.

In Problem 3, control performance can be adjusted by suitably choosing ε. We remark that in this problem, τ_0 is maximized under certain constraints. Furthermore, in this problem, a control input sequence u_0, u_1, ..., u_{N-1} is computed, but among the sampling periods only the first one (τ_0) is computed. In this sense, this problem is regarded as a kind of optimal control problem with prediction horizon one.

Hereafter, in Section 4, an iterative solution method for the N-step input-constrained problem (Problem 3) will be proposed. In Section 5, an approximate solution method for the one-step input-constrained problem (Problem 2) will be proposed.

Remark 4. The parameter T in Problems 2 and 3 is chosen based on the computation time for solving the problem and on the dynamics of the given plant. If computation of the problem is not finished by the next sampling time, then the next control input cannot be applied to the plant. See also the procedure of self-triggered MPC in Section 2.

4. Iterative Solution Method for N-Step Input-Constrained Problem

First, for a fixed τ_0, consider deriving J*(τ_0). The value of J*(T) can be derived by a similar method. The value of J*(τ_0) is given by the optimal value of the following optimal control problem.

Problem 5. Suppose that for the system (1), the initial time t_0, the initial state x(t_0) = x_0, τ_0, and T are given. Then, find a control input sequence u_0, u_1, ..., u_{N-1} minimizing the cost function (10) under the input constraint (11).

From conventional results on sampled-data control theory, Problem 5 can be equivalently rewritten as the following optimal control problem for a time-varying discrete-time linear system with the input constraint (11).

Problem 6. Suppose that the initial time t_0, the initial state x(t_0) = x_0, τ_0, and τ_k = T, k = 1, 2, ..., N - 1, are given. Consider the following discrete-time linear system:

x_{k+1} = A_d(τ_k) x_k + B_d(τ_k) u_k,

where x_k := x(t_k), A_d(τ_k) := e^{A τ_k}, and B_d(τ_k) := ∫_0^{τ_k} e^{A s} ds B. Then, find a control input sequence u_0, u_1, ..., u_{N-1} minimizing the cost function (10), rewritten in terms of the discretized weight matrices, under the input constraint (11).

Next, consider reducing Problem 6 to a QP problem. Define the stacked state vector X := [x_1^T, x_2^T, ..., x_N^T]^T and the stacked input vector U := [u_0^T, u_1^T, ..., u_{N-1}^T]^T. Then, we can obtain X = Φ x_0 + Γ U, where Φ and Γ are block matrices composed of the discretized system matrices A_d(τ_k) and B_d(τ_k). In addition, we define the block-diagonal weight matrices obtained by stacking the discretized stage weights and the terminal weight P. Then, the cost function of Problem 6 can be rewritten as a quadratic function of U, that is, J = (1/2) U^T H U + f^T U + c, where the matrix H, the vector f, and the constant c are determined from Φ, Γ, the weight matrices, and x_0. Finally, the stacked input bounds U_min and U_max corresponding to the constraint (11) are also defined.
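A minimal sketch of this condensing step is given below. The variable names (Phi, Gamma, H, f) and the omission of the cross term between the state and the input are simplifications made for readability; the paper's own matrices are not reproduced.

```python
import numpy as np
from scipy.linalg import block_diag

def condense(A_list, B_list, Q_list, R_list, P, x0):
    """Condense a time-varying discrete-time LQ problem into one quadratic form.

    x_{k+1} = A_k x_k + B_k u_k, stage cost x_k'Q_k x_k + u_k'R_k u_k,
    terminal cost x_N' P x_N.  Returns (H, f, c) such that
        J(U) = 0.5 U'HU + f'U + c,   U = [u_0; ...; u_{N-1}].
    """
    N = len(A_list)
    n, m = B_list[0].shape
    Phi = np.zeros((N * n, n))
    Gamma = np.zeros((N * n, N * m))
    Ak_prod = np.eye(n)
    for k in range(N):
        Ak_prod = A_list[k] @ Ak_prod            # A_k ... A_0
        Phi[k * n:(k + 1) * n, :] = Ak_prod
        for j in range(k + 1):
            blk = B_list[j]
            for i in range(j + 1, k + 1):
                blk = A_list[i] @ blk            # A_k ... A_{j+1} B_j
            Gamma[k * n:(k + 1) * n, j * m:(j + 1) * m] = blk
    Qbar = block_diag(*Q_list[1:], P)            # weights on x_1, ..., x_N
    Rbar = block_diag(*R_list)
    H = 2.0 * (Gamma.T @ Qbar @ Gamma + Rbar)
    f = 2.0 * Gamma.T @ Qbar @ Phi @ x0
    c = x0 @ (Q_list[0] + Phi.T @ Qbar @ Phi) @ x0
    return H, f, c
```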

Under the above preparation, Problem 6 is equivalent to the following QP problem.

Problem A. Consider the QP problem of minimizing the quadratic function J(U) = (1/2) U^T H U + f^T U + c subject to the linear constraint U_min ≤ U ≤ U_max.

A QP problem of this form can be solved by using a suitable solver such as MATLAB or IBM ILOG CPLEX [15].
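For completeness, the following sketch shows one generic way to solve such a box-constrained QP in Python; it is only a stand-in for the MATLAB/CPLEX solvers mentioned in the text, and the bound values in the commented example are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def solve_box_qp(H, f, u_min, u_max, u_init=None):
    """Solve  min 0.5 U'HU + f'U  s.t.  u_min <= U <= u_max (elementwise).

    For a convex H this returns the global optimum up to solver tolerance.
    """
    nu = len(f)
    if u_init is None:
        u_init = np.zeros(nu)
    obj = lambda U: 0.5 * U @ H @ U + f @ U
    jac = lambda U: H @ U + f
    bounds = [(u_min, u_max)] * nu
    res = minimize(obj, u_init, jac=jac, method="L-BFGS-B", bounds=bounds)
    return res.x, res.fun

# Example call with the condensed data from the previous sketch (placeholder bounds):
# U_opt, J_opt = solve_box_qp(H, f, u_min=-1.0, u_max=1.0)
```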

Third, by using the obtained QP problem, we propose an algorithm for solving Problem 3.

Algorithm 7. Step  1. Derive J*(T) by solving Problem A with τ_0 = T.

Step  2. Set τ_low := T and τ_high := τ_max, and give a sufficiently small positive real number δ.

Step  3. Set τ_0 := (τ_low + τ_high)/2.

Step  4. Derive J*(τ_0) by solving Problem A.

Step  5. If the constraint (12) in Problem 3 is satisfied, then set τ_low := τ_0; otherwise, set τ_high := τ_0.

Step  6. If τ_high - τ_low ≤ δ is satisfied, then the optimal τ_0 in Problem 3 is derived as τ_low, and the corresponding optimal control input sequence is also derived. Otherwise, go to Step 3.

In a numerical example (Section 6.1), we will discuss the computation time of Algorithm 7.
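To make Algorithm 7 concrete, the following sketch implements the bisection search described above. The acceptance test J <= (1 + eps) * J_ref, the bracket [T, tau_max], and the implicit assumption that the test behaves monotonically in tau_0 are assumptions used for illustration; solve_problem_A stands in for a solver of Problem A.

```python
def maximize_sampling_period(solve_problem_A, T, tau_max, eps, delta=1e-3):
    """Bisection sketch of Algorithm 7 (assumed acceptance test; see the text).

    solve_problem_A(tau0) -> (J, u_seq): optimal cost and input sequence of
    Problem A when the first sampling period is tau0.
    """
    J_ref, u_ref = solve_problem_A(T)          # Step 1: reference cost at tau0 = T
    tau_low, tau_high = T, tau_max             # Step 2: bisection bracket
    best = (T, u_ref)
    while tau_high - tau_low > delta:          # Step 6: stopping test
        tau0 = 0.5 * (tau_low + tau_high)      # Step 3: candidate sampling period
        J, u_seq = solve_problem_A(tau0)       # Step 4: solve the QP (Problem A)
        if J <= (1.0 + eps) * J_ref:           # Step 5: performance constraint (12)
            tau_low, best = tau0, (tau0, u_seq)
        else:
            tau_high = tau0
    return best                                # maximized tau0 and its input sequence
```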

Finally, we discuss the stabilization issue. For Problem 6, consider imposing a contraction constraint of the form V(x_N) ≤ σ V(x_0), where V is a given nonnegative function and σ, 0 ≤ σ < 1, is a given constant. If V is restricted to V(x) = ‖W x‖_∞, where W is a given nonsingular matrix and ‖·‖_∞ is the infinity norm, then the constraint can be transformed into a set of linear inequalities (see, e.g., [16]) and can be embedded in Problem A. Then, the closed-loop system in which the control input derived by Algorithm 7 is applied is asymptotically stable. See, for example, [17, 18] for further details.
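As one possible concrete form of such a constraint (an assumption for illustration, not the paper's exact construction), a contraction condition on the terminal state in the infinity norm can be written as linear inequalities in the stacked input vector:

```python
import numpy as np

def contraction_constraint_rows(W, Phi_N, Gamma_N, x0, sigma):
    """Linear inequalities encoding ||W x_N||_inf <= sigma * ||W x0||_inf.

    With the terminal state x_N = Phi_N x0 + Gamma_N U, the constraint becomes
    G U <= h, which can be appended to the QP of Problem A.
    """
    rho = sigma * np.linalg.norm(W @ x0, ord=np.inf)
    G = np.vstack((W @ Gamma_N, -W @ Gamma_N))
    h = np.concatenate((rho - W @ Phi_N @ x0, rho + W @ Phi_N @ x0))
    return G, h
```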

5. Approximate Solution Method for One-Step Input-Constrained Problem

In this section, first we derive a solution method for the one-step input-constrained problem (Problem 2). In the proposed solution method, Problem 2 is approximately reduced to a quadratic programming (QP) problem, but the convexity is not guaranteed. Next, we discuss the convexity.

5.1. Proposed Solution Method

First, noting that the control input is piecewise constant on the interval [t_0, t_0 + τ_0], the cost function of Problem 2 can be equivalently rewritten in terms of weight matrices that depend on the sampling period τ_0, that is, the discretized weights obtained from Q, R, and the system (1). We then focus on these weight matrices. By using Taylor series expansions of the matrix exponential with respect to τ_0, the discretized weight matrices can be expressed as matrix polynomials in τ_0. Next, we focus on the term involving the terminal weight P. Using the discretized system matrices, this term can be rewritten in the same way, and Taylor series expansions are applied to it as well. From these results, the cost function can be expressed as a polynomial function of the decision variables u_0 and τ_0. In this paper, the second-order truncated Taylor series is used. Furthermore, the remaining higher-order term in τ_0 is approximated by using the bounds T and τ_max of the sampling period so that the cost becomes quadratic in u_0 and τ_0 (see also Remark 9). Then, we consider the resulting approximate cost function, which is quadratic in the decision variables. Under these preparations, we can obtain the following theorem.

Theorem 8. The one-step input-constrained problem of Problem 2 is approximately reduced to the following QP problem.

Problem B. Consider the QP problem of minimizing the approximate cost function, which is quadratic in the decision variable z := (u_0, τ_0), subject to the linear inequality constraints on z obtained from the input constraint and the bounds T ≤ τ_0 ≤ τ_max.

Proof. By rewriting the approximate cost function derived above in the variable z, the cost function in Problem B is obtained. In addition, since u_0 appears linearly in z, the input constraint in Problem 2 is equivalent to a linear inequality constraint on z.

By solving Problem B, we can obtain a suboptimal u_0 and τ_0. Problem B is a QP problem, but its cost function is in general nonconvex. A locally optimal solution can be derived by using a suitable solver, for example, MATLAB.
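A minimal sketch of such a local solve is shown below, with a generic gradient-based solver standing in for MATLAB; Hb and fb denote the (possibly indefinite) quadratic-term matrix and linear-term vector of Problem B, and the bound tuples are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def solve_problem_B_locally(Hb, fb, u_bounds, tau_bounds, z_init):
    """Locally solve  min 0.5 z'Hb z + fb'z  over z = (u_0, tau_0) with box bounds.

    Hb may be indefinite, so only a local optimum is guaranteed.
    """
    obj = lambda z: 0.5 * z @ Hb @ z + fb @ z
    jac = lambda z: Hb @ z + fb
    res = minimize(obj, z_init, jac=jac, method="SLSQP",
                   bounds=[u_bounds, tau_bounds])
    return res.x, res.fun
```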

Remark 9. In Theorem 8, the accuracy of the approximations is not considered. Several existing results on the analysis of the truncation error of Taylor series have been obtained so far. In addition, by suitably setting the approximation of the higher-order term, the approximate cost function can be regarded as an over-approximation of the original cost function. Using the above discussion, an upper bound of the optimal value of Problem 2 can be evaluated by solving Problem B.
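The effect of the truncation discussed in Remark 9 can be illustrated numerically by comparing the exact matrix exponential with its second-order truncated Taylor series; the matrix A and the sampling periods below are placeholders, and the paper's full expansion of the weight matrices is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

def expm_taylor2(A, tau):
    """Second-order truncated Taylor series of the matrix exponential e^{A tau}."""
    n = A.shape[0]
    return np.eye(n) + A * tau + 0.5 * (A @ A) * tau**2

# Truncation error for a few sampling periods (illustrative system).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
for tau in (0.05, 0.2, 0.5):
    err = np.linalg.norm(expm(A * tau) - expm_taylor2(A, tau))
    print(f"tau = {tau:4.2f}: ||expm - 2nd-order Taylor|| = {err:.2e}")
```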

5.2. Discussion on Convexity

The cost function in Problem B is in general nonconvex. In other words, the matrix of its quadratic term is in general not positive definite. In this subsection, we clarify the reason why this matrix is not positive definite.

The matrix of the quadratic term can be rewritten as the sum of two matrices. The first matrix is positive definite, because it is constructed from the positive definite weight matrices. However, it is obvious that the second matrix is not positive definite. Therefore, the matrix of the quadratic term is in general not a positive definite matrix.

Consider approximating the cost function in Problem B by a convex function. We define a positive definite matrix from the positive definite part of the quadratic term and rewrite the cost function accordingly. Then, the cost function is expressed as the sum of a convex quadratic term and a negative term. By approximating the negative term by a linear term, Problem B can in general be reduced to a convex QP problem. Noting that in the single-input case the decision variables are only u_0 and τ_0, we can draw a two-dimensional graph of the cost function with respect to u_0 and τ_0. Using the obtained two-dimensional graph, we can evaluate whether the approximation is reasonable.
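The convexification idea can be sketched as follows, using an eigenvalue split of the quadratic-term matrix and a first-order (linear) approximation of its concave part at a reference point z_ref. This generic convex-concave style construction is an assumption made for illustration; it is not the paper's exact definition of the positive definite matrix.

```python
import numpy as np

def convexify_by_linearization(Hb, fb, z_ref):
    """Replace the concave part of 0.5 z'Hb z + fb'z by its linearization at z_ref.

    Hb is split via its eigendecomposition into a positive semidefinite part
    H_plus and a negative semidefinite part H_minus; the term 0.5 z'H_minus z
    is approximated by its first-order Taylor expansion at z_ref.
    """
    w, V = np.linalg.eigh(Hb)
    H_plus = V @ np.diag(np.maximum(w, 0.0)) @ V.T
    H_minus = V @ np.diag(np.minimum(w, 0.0)) @ V.T
    # 0.5 z'H_minus z  ~=  0.5 z_ref'H_minus z_ref + z_ref'H_minus (z - z_ref)
    f_lin = fb + H_minus @ z_ref
    const = -0.5 * z_ref @ H_minus @ z_ref
    return H_plus, f_lin, const   # convex surrogate: 0.5 z'H_plus z + f_lin'z + const
```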

6. Numerical Examples

6.1. Iterative Solution Method

First, we show an example of the iterative solution method proposed in Section 4.

Consider the following single-input linear system, labeled (39), together with its input constraint. Parameters in Problem 3 are given as follows: the weights Q and R, the constant sampling period T, and the constant ε. In Algorithm 7, we set a sufficiently small tolerance δ. Then, the terminal weight matrix P can be derived from the corresponding discrete-time algebraic Riccati equation. In addition, we consider two cases, that is, two values of the horizon N.

We show the computational result of self-triggered MPC with the N-step input-constrained problem of Problem 3. The initial state is given as x_0, and one of the two cases of N is considered. Figure 1 shows the obtained state trajectory, and Figure 2 shows the control input trajectory. From these figures, we see that the sampling period is nonuniform.

Next, we compare the two cases. In these cases, the obtained state trajectories are almost the same. The difference between the two cases is as follows. In Figures 1 and 2, that is, in the case considered above, the control input at each time is obtained as follows:

In the other case of N, the control input at each time is derived as follows: From these results, we can discuss the following point. In this example, input saturation is needed to improve the transient behavior. However, in one of the two cases, the time interval of input saturation was not computed suitably. As a result, to derive the state trajectory over the considered time interval, Problem 3 must be solved eight times. In the other case, Problem 3 needs to be solved only six times. Hence, it is important to choose a suitable N. We remark that in this example, the computational result in the latter case is the same as that obtained with a further enlarged horizon. In this sense, the latter value of N is one of the suitable horizons.

In addition, we discuss the effect of changing ε in (12). For one of the two values of N, consider five values of ε. For each case, the first sampling period τ_0 is obtained as follows: From these results, we see that τ_0 becomes longer by setting a larger ε. Since control performance decreases for a larger ε, it is important to consider the trade-off between control performance and τ_0.

Finally, we discuss the computation time for solving the N-step input-constrained problem of Problem 3. In one of the two cases, Problem 3 with different initial states is solved six times. Then, the mean computation time for solving Problem 3 was 6.51 [sec], where we used IBM ILOG CPLEX 11.0 [15] as the MIQP solver on a computer with an Intel Core2 Duo 3.0 GHz processor and 2 GB of memory. In the other case, Problem 3 with different initial states is solved eight times. Then, the mean computation time was 6.22 [sec]. From these results, it is difficult at this stage to solve Problem 3 in real time. It is significant to consider several approaches for reducing the computation time. One simple method is to limit the number of iterations in Algorithm 7 to some integer depending on the computing environment.

6.2. Approximate Solution Method

Next, we show an example of the approximate solution method proposed in Section 5.

Consider the system (39) again. The input constraint is given as in Section 6.1. Parameters in the one-step input-constrained problem of Problem 2 are given as follows: the weights Q and R, the constant sampling period T, and the upper bound τ_max. Since the approximate solution method in Section 5 is derived using an approximation via Taylor series expansions, it is not desirable that the difference between T and τ_max is large. Therefore, T and τ_max must be set carefully. Furthermore, in this example, Problem B is transformed into a convex QP problem. From T and τ_max, the higher-order term in the approximate cost function is approximated by a linear term.

We show the computational result of self-triggered MPC with the transformed Problem B. The initial state is given as x_0. Figure 3 shows the obtained state trajectory, and Figure 4 shows the control input trajectory. The obtained control input is shown as follows: From these results, we see that also in this example, the sampling period is nonuniform.

Finally, we discuss the computation time for solving the transformed Problem B. The transformed Problem B with different initial states is solved 17 times. Then, the mean computation time was 0.01 [sec], where we used IBM ILOG CPLEX 11.0 as the QP solver. Since in this example the number of decision variables is only two, the computation is very fast.

7. Conclusion

In this paper, we discussed self-triggered MPC of linear systems. Since it is difficult to solve the original problem (Problem 1), two control problems (the one-step input-constrained problem of Problem 2 and the N-step input-constrained problem of Problem 3) were formulated instead of Problem 1. For Problem 3, an iterative solution method was proposed. For Problem 2, an approximate solution method was proposed, and its convexity was also discussed. The effectiveness of the proposed methods was shown by numerical examples. The proposed methods are useful as a new approach to self-triggered optimal control.

As future work, it is first important to develop a more efficient method for solving Problem 3; the continuation method [19] may be useful for this purpose. Next, since the proposed method for the one-step input-constrained problem (Problem 2) is an approximate method, it is difficult at the current stage to guarantee the stability of the closed-loop system. The stabilization issue for the one-step input-constrained problem is also an important topic for future work.

Conflict of Interests

The authors declare that they have no conflict of interests.

Acknowledgment

This work was partially supported by Grant-in-Aid for Young Scientists (B) 23760387.