Abstract

New directions in model predictive control (MPC) are introduced. On the one hand, we combine input-to-state dynamical stability (ISDS) with MPC for single and interconnected systems. On the other hand, we introduce MPC schemes guaranteeing input-to-state stability (ISS) of single systems and networks with time delays. In both directions, recent results of the stability analysis from the mentioned areas are applied, using Lyapunov function(al)s to show that the corresponding cost function(al) of the MPC scheme is a Lyapunov function(al). For networks, we show that under a small-gain condition, the closed-loop network with an optimal control obtained by an MPC scheme has the ISDS or ISS property, respectively.

1. Introduction

The MPC approach originated in the late 1970s and became widespread in the 1990s with the increasing use of automation in industry. It has a wide range of applications; see the survey papers [1, 2].

The aim of MPC is to control a system to follow a certain trajectory or to steer the solution of a system into an equilibrium point under constraints and unknown disturbances. Additionally, the control should be optimal with respect to defined goals, for example, optimal regarding effort. An overview of MPC can be found in the books [3–5] and the Ph.D. theses [6–8], for example.

We consider systems with disturbances of the form (1.1), where the disturbance is unknown and takes values in a compact and convex set containing the origin. The input is a measurable and essentially bounded control subject to input constraints, where the constraint set is compact and convex and contains the origin in its interior. The function on the right-hand side is assumed to be locally Lipschitz in the state, uniformly in the inputs, to guarantee that a unique solution of (1.1) exists.

The control input is obtained by an MPC scheme and applied to the system. We are interested in the stability of MPC. It was shown in [9] that applying the control obtained by an MPC scheme does not guarantee that the resulting system without disturbances is asymptotically stable. For applications, it is therefore desirable to analyze under which conditions stability of a system can be achieved using an MPC scheme. An overview of existing results regarding stability and MPC for systems without disturbances can be found in [10], and recent results are included in [5–8]. A general framework for designing stabilizing MPC controllers for nonlinear systems can be found in [11].

Taking the unknown disturbance into account, MPC schemes which guarantee input-to-state stability (ISS) were developed. First results regarding ISS for MPC of nonlinear discrete-time systems can be found in [12]. Furthermore, results using the ISS property with initial states from a compact set, namely regional ISS, are given in [6, 13]. In [14, 15], an MPC scheme that guarantees ISS using the so-called min-max approach was given. This approach uses a closed-loop formulation of the optimization problem to compensate the effect of the unknown disturbance.

Stable MPC schemes for interconnected systems were investigated in [6, 16, 17]: in [6, 16], conditions to assure ISS of the whole system were derived, and in [17], asymptotically stable MPC schemes without terminal constraints were provided. Note that in [17], the subsystems are not directly interconnected, but they exchange information over the network to control themselves according to state constraints.

One research topic of this paper provides a new direction in MPC: we combine the input-to-state dynamical stability (ISDS) property, introduced in [18], with MPC for single and interconnected systems. The provided MPC scheme uses the min-max approach (see [14, 15]). Conditions are derived such that single closed-loop systems and whole closed-loop networks with an optimal control obtained by an MPC scheme have the ISDS property. The results of [18] for single systems and the ISDS small-gain theorem for networks (see [19]) are applied to prove the main results of the corresponding section.

The advantage of using ISDS instead of ISS for MPC is that the ISDS estimate takes only recent values of the disturbance into account, due to its memory-fading effect; see [18, 19]. In particular, if the disturbance tends to zero, then the ISDS estimate tends to zero as well. Moreover, the decay rate can be derived using ISDS-Lyapunov functions. This information can be useful in applications of MPC.

In practice, there are problems where the advantages of ISDS over ISS, in particular the memory-fading effect of the ISDS estimate, lead to more cost-efficient controllers. Examples are the control of aircraft, robots, or automated transportation vehicles.

A second research topic of this paper is the stability analysis of MPC schemes for systems with time-delays. Time-delays occur in many applications, for example, in communication networks, logistic networks, or biological systems. The presence of time-delays can lead to instability of a network; see [9], where it was shown that applying the control obtained by an MPC scheme does not guarantee that a system without disturbances is asymptotically stable.

Therefore, we are interested in analyzing networks with time-delays in view of input-to-state stability (ISS). In [2, 20], tools based on the Lyapunov-Razumikhin and Lyapunov-Krasovskii approaches were developed to check whether a single system with time-delays has the ISS property. For networks with time-delays, recent results regarding ISS were given in [21] using a small-gain condition.

Considering time-delay systems (TDSs) and MPC, recent results on asymptotically stable MPC schemes for single systems can be found in [22, 23]. In these works, continuous-time TDSs were investigated and conditions were derived which guarantee asymptotic stability of a TDS using a Lyapunov-Krasovskii approach. Moreover, with the help of Lyapunov-Razumikhin arguments, it was shown how to determine the terminal cost and terminal region and how to compute a locally stabilizing controller.

As a second part of this paper, we investigate the ISS property for MPC of single systems and networks with time-delays. Conditions are derived such that single closed-loop TDSs and whole closed-loop time-delay networks with an optimal control obtained by an MPC scheme have the ISS property. The results of the Lyapunov-Krasovskii approach introduced in [20] for single systems, and the corresponding small-gain theorem proved in [21] for networks with time-delays, are applied to prove the main results of the corresponding section.

Since time-delays and disturbances appear in many problems, the results of the second part of this paper regarding ISS for MPC of time-delay systems can be applied to a wide range of practical problems. Classical examples are not only communication networks, transportation, or production systems, but also biological or chemical networks.

In comparison to existing results in the literature, where only ISS for MPC of single systems (see [6, 13–15]) and networks (see [6, 16]) without time-delays was investigated, we use, on the one hand, the advantages of ISDS for MPC, in particular the memory-fading effect. On the other hand, we use the stability notion of ISS for MPC of systems with time-delays and disturbances, whereas in the literature only MPC schemes for single time-delay systems without disturbances were investigated in view of asymptotic stability (see [22, 23]). Both approaches presented in this paper are new, and this paper is a first theoretical step in the mentioned directions.

This paper is organized as follows: the preliminaries are given in Section 2. In Section 3.1, an MPC scheme of single systems guaranteeing ISDS is provided. ISDS for MPC of networks is investigated in Section 3.2, where we prove that each subsystem has the ISDS property and the whole network has the ISDS property using the control obtained by an MPC scheme. In Section 4.1, the ISS property for MPC of single systems is investigated. Networks with time-delays are considered in Section 4.2. Finally, the conclusions and an outlook for future research possibilities can be found in Section 5.

2. Preliminaries

By we denote the transposition of a vector ; furthermore, and denotes the positive orthant , where we use the standard partial order for given by

denotes the Euclidean norm in . The essential supremum norm of a (Lebesgue-) measurable function is denoted by . We denote the set of essentially bounded (Lebesgue-) measurable functions from to by

is the gradient of a function .

For , let denote the Banach space of continuous functions defined on equipped with the norm and values in . Let . The function is given by .

For a function , we define its restriction to the interval by

Definition 2.1. We define the following classes of functions: We will call functions of class   positive definite.

Now, we recall some results related to ISDS. To this end, we consider systems of the form where is the (continuous) time, denotes the derivative of the state , and is the input. The function , , is assumed to be locally Lipschitz continuous in uniformly in to guarantee existence and uniqueness of the solution, denoted by or for short, for the given initial value .

The notion of ISDS was introduced in [18].

Definition 2.2 (Input-to-state dynamical stability (ISDS)). The system (2.5) is called input-to-state dynamically stable (ISDS) if there exist , such that for all initial values and all inputs , it holds that for all . is called decay rate, is called overshoot gain, and is called robustness gain.
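For orientation, the ISDS estimate of [18] typically takes the following form; this is a sketch, and the symbols $\mu$ (decay rate), $\eta$ (overshoot gain), and $\gamma$ (robustness gain) are our reconstruction of the usual notation:

```latex
% Typical form of the ISDS estimate (cf. [18]); mu is the decay rate,
% eta the overshoot gain, gamma the robustness gain.
\[
  |x(t;x_0,u)| \;\le\;
  \max\Bigl\{ \mu\bigl(\eta(|x_0|),\,t\bigr),\;
  \operatorname*{ess\,sup}_{\tau\in[0,t]}
  \mu\bigl(\gamma(\|u(\tau)\|),\,t-\tau\bigr) \Bigr\},
  \qquad t \ge 0 .
\]
```

The memory-fading effect is visible in this form: the disturbance value at time $\tau$ enters the estimate only through $\mu(\cdot,\,t-\tau)$, so its influence decays as $t$ grows.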

A useful tool to check whether a system has the ISDS property is the following.

Definition 2.3 (ISDS-Lyapunov function). Given , a function , which is locally Lipschitz on , is called an ISDS-Lyapunov function of the system (2.5) if there exist such that holds, for almost all and all , where solves for a locally Lipschitz continuous function .

The equivalence of ISDS and the existence of an ISDS-Lyapunov function was proved in [18].

Theorem 2.4. The system (2.5) is ISDS with and if and only if for each there exists an ISDS-Lyapunov function .

Remark 2.5. Note that for a system, which possesses the ISDS property, it holds that the decay rate and gains in Definition 2.2 are exactly the same as in Definition 2.3.

Now, consider networks of the form where , , , and are locally Lipschitz in uniformly in . If we define , and , then (2.10) can be written as a system of the form (2.5), which we call the whole system.

The th subsystem of (2.10) is called ISDS if there exist a -function and functions , and such that the solution for all initial values , all inputs , and for all satisfies . are called gains.

We collect all the gains in a matrix , defined by with . This defines a map for by

In view of ISDS of the whole network, we say that satisfies the small-gain condition (SGC) (see [24]) if
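For linear gains, the small-gain condition reduces to the spectral radius of the nonnegative gain matrix being smaller than one (see [24]). The following sketch checks this numerically; the gain-matrix entries and the power-iteration routine are illustrative, not taken from the paper.

```python
# Hedged sketch: for *linear* gains gamma_ij(s) = G[i][j] * s, the small-gain
# condition Gamma(s) >/= s for all s != 0 is equivalent to the spectral radius
# of the nonnegative gain matrix G being < 1 (see [24]).  We estimate the
# Perron root by power iteration in pure Python.

def spectral_radius(G, iters=200):
    """Estimate the Perron root of a nonnegative square matrix by power
    iteration.  A shift by the identity makes the iteration converge even
    for matrices with periodic structure (e.g. purely off-diagonal gains)."""
    n = len(G)
    A = [[G[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    v = [1.0] * n
    r = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        r = max(w)                    # converges to 1 + rho(G)
        v = [wi / r for wi in w]      # renormalize (max-norm)
    return r - 1.0

def satisfies_small_gain(G, tol=1e-9):
    """Linear small-gain condition: rho(G) < 1."""
    return spectral_radius(G) < 1.0 - tol

# Example: two interconnected subsystems with mutual gains 0.5 and 0.4;
# rho(G) = sqrt(0.2) ~ 0.447 < 1, so the small-gain condition holds.
G = [[0.0, 0.5],
     [0.4, 0.0]]
print(satisfies_small_gain(G))  # True
```

For nonlinear gains, the condition has to be verified directly on the map defined by the gain matrix; the linear case above is the standard special case in which the check is a simple eigenvalue computation.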

To recall the Lyapunov version of the small-gain theorem for ISDS, we need the following.

Definition 2.6. A continuous path is called an -path with respect to if (i) for each , the function is locally Lipschitz continuous on ; (ii) for every compact set , there are constants such that for all points of differentiability of and we have (iii) it holds that , for all .

More details about an -path can be found in [24–26].

The following proposition is useful for the construction of an ISDS-Lyapunov function for the whole system.

Proposition 2.7. Let be a gain-matrix. If satisfies the small-gain condition (2.13), then there exists an -path with respect to .

The proof can be found in [24], for example.

We assume that for each subsystem of (2.10) there exists a function , which is locally Lipschitz continuous and positive definite. Given , a function , which is locally Lipschitz continuous on , is an ISDS-Lyapunov function of the th subsystem in (2.10) if it satisfies the following: (i) there exists a function such that for all it holds (ii) there exist functions , such that for almost all , all inputs , and it holds that where solves for some locally Lipschitz function .

Now, we recall the main result of [19], which establishes ISDS for networks using Lyapunov functions.

Theorem 2.8. Assume that each subsystem of (2.10) has the ISDS property. This means that for each subsystem and for each there exists an ISDS-Lyapunov function , which satisfies (2.15) and (2.16). Let be given by (2.12), satisfying the small-gain condition (2.13), and let be an -path from Proposition 2.7 with respect to . Then, the whole system (2.5) has the ISDS property and its ISDS-Lyapunov function is given by where .
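The construction in Theorem 2.8 can be illustrated numerically: the ISDS-Lyapunov function of the network is the maximum of the rescaled subsystem Lyapunov functions. In the sketch below, the quadratic subsystem functions and the linear $\Omega$-path components are illustrative assumptions, not taken from [19].

```python
# Illustrative sketch of the max-construction of Theorem 2.8:
# V(x) = max_i sigma_i^{-1}(V_i(x_i)), built from subsystem Lyapunov
# functions V_i and an Omega-path sigma.  The quadratic V_i and the
# linear path components sigma_i(r) = c_i * r are assumptions for
# illustration only.

def make_network_lyapunov(subsystem_Vs, sigma_invs):
    """Compose V(x) = max_i sigma_i^{-1}(V_i(x_i)) from per-subsystem data."""
    def V(states):  # states: one state (here: a float) per subsystem
        return max(s_inv(Vi(xi))
                   for Vi, s_inv, xi in zip(subsystem_Vs, sigma_invs, states))
    return V

# Two subsystems with quadratic Lyapunov functions ...
V1 = lambda x: x * x
V2 = lambda x: 2.0 * x * x
# ... and a linear Omega-path sigma_i(r) = c_i * r, so sigma_i^{-1}(v) = v / c_i.
sigma_inv1 = lambda v: v / 1.0
sigma_inv2 = lambda v: v / 4.0

V = make_network_lyapunov([V1, V2], [sigma_inv1, sigma_inv2])
print(V([1.0, 2.0]))  # max(1.0, 8.0 / 4.0) = 2.0
```

The role of the $\Omega$-path is precisely this rescaling: under the small-gain condition, it can be chosen so that the pointwise maximum again satisfies the dissipation inequality of an ISDS-Lyapunov function.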

As a second topic of this paper, we are going to establish ISS with the help of MPC for TDSs of the form where ,, and “” represents the right-hand side derivative. is the maximum involved delay, and is locally Lipschitz continuous on any bounded set. This guarantees that the system (2.18) admits a unique solution on a maximal interval , , which is locally absolutely continuous, see [27, Section 2.6]. We denote the solution by or for short, satisfying the initial condition for any .

The notion of ISS for TDSs reads as follows.

Definition 2.9 (ISS for TDSs). The system (2.18) is called ISS if there exist and such that for all , all , and all it holds that

In [20], ISS-Lyapunov-Krasovskii functionals are introduced to check whether a TDS has the ISS property. Given a locally Lipschitz continuous functional , the upper right-hand side derivative of the functional along the solution is defined according to [27, Chapter 5.2], where is generated by the solution of , and with .

Remark 2.10. Note that in contrast to (2.20), the definition of in [20] is slightly different, since there the functional is assumed to be only continuous and in that case, can take infinite values. Nevertheless, the results in [20] also hold true if the functional is chosen to be locally Lipschitz, according to the results in [28] and using (2.20).

By , we indicate any norm in such that for some the following inequalities hold:

Definition 2.11 (ISS-Lyapunov-Krasovskii functional). A locally Lipschitz continuous functional is called an ISS-Lyapunov-Krasovskii functional for the system (2.18) if there exist functions and functions such that for all .
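For orientation, the conditions in Definition 2.11 typically take the following form (a sketch in the style of [20]; the symbols $\psi_1,\psi_2,\chi,\alpha$ and the norm $\|\cdot\|_a$ are our reconstruction and may differ from the notation of the original):

```latex
% Sketch of typical ISS-Lyapunov-Krasovskii conditions (cf. [20]):
% a sandwich bound and a dissipation inequality in implication form.
\[
  \psi_1(|\phi(0)|) \;\le\; V(\phi) \;\le\; \psi_2(\|\phi\|_a),
\]
\[
  V(\phi) \;\ge\; \chi(|w|)
  \;\Longrightarrow\;
  D^{+}V(\phi,w) \;\le\; -\alpha\bigl(V(\phi)\bigr),
\]
```

for all states $\phi$ and all disturbance values $w$, where $\|\cdot\|_a$ is the norm introduced before Definition 2.11.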

The next theorem was proved in [20].

Theorem 2.12. If there exists an ISS-Lyapunov-Krasovskii functional for the system (2.18), then the system (2.18) has the ISS property.

Now, we investigate networks with time-delays: we consider interconnected TDSs of the form where , and . denotes the maximal involved delay and can be interpreted as internal inputs of the th subsystem. The functionals are locally Lipschitz continuous on any bounded set. We denote the solution of a subsystem by or for short, satisfying the initial condition for any .

The ISS property for a subsystem of (2.24) reads as follows: the subsystem of (2.24) is ISS if there exist , such that for all it holds

If we define , , , , and , then (2.24) can be written as a system of the form (2.18), which we call the whole system. The Krasovskii functionals for subsystems are as follows.

A locally Lipschitz continuous functional is an ISS-Lyapunov-Krasovskii functional of the th subsystem of (2.24) if there exist functionals , which are positive definite and locally Lipschitz continuous on , functions , , and , such that for all for all , .

The gain-matrix is defined by , , which defines a map as in (2.12).

The next theorem is one of the main results of [21] and provides a construction for an ISS-Lyapunov-Krasovskii functional of the whole system.

Theorem 2.13 (ISS-Lyapunov-Krasovskii theorem for general networks with time-delays). Consider an interconnected system of the form (2.24). Assume that each subsystem has an ISS-Lyapunov-Krasovskii functional , which satisfies the conditions (2.26), . If the corresponding gain-matrix satisfies the small-gain condition (2.13), then is the ISS-Lyapunov-Krasovskii functional for the whole system of the form (2.18), which is ISS, where is an -path as in Definition 2.6 and . The Lyapunov gain is given by .

Now, we present the new directions in MPC: ISDS and ISS for single and interconnected systems with and without time-delays. We start with MPC schemes guaranteeing ISDS.

3. MPC and ISDS

In this section, we combine ISDS and MPC for nonlinear single and interconnected systems. Conditions are derived which assure ISDS of the closed-loop system obtained by applying the control calculated by an MPC scheme to the system (1.1).

3.1. Single Systems

We consider systems of the form (1.1) and use the min-max approach to calculate an optimal control: to compensate the effect of the disturbance, we apply a feedback control law to the system. An optimal control law is obtained by solving the finite horizon optimal control problem (FHOCP), which consists of minimizing the cost function with respect to the control and maximizing it with respect to the disturbance. The following definition is taken from [14, 15] with a slight adjustment, used here to apply the ISDS property to the FHOCP.

Definition 3.1 (Finite horizon optimal control problem (FHOCP)). Let be given. Let be the prediction horizon and a feedback control law. The finite horizon optimal control problem for a system of the form (1.1) is formulated as where is the initial value of the system at time , the terminal region is a compact and convex set with the origin in its interior, and is essentially bounded, locally Lipschitz in and measurable in . is the stage cost, where penalizes the distance of the state from the equilibrium point of the system and penalizes the control effort. penalizes the disturbance, which influences the system's behavior. and are locally Lipschitz continuous with , and is the terminal penalty.

The FHOCP will be solved at the sampling instants . The optimal solution is denoted by and , . The optimal cost function is denoted by . The control input to the system (1.1) is defined in the usual receding horizon fashion as
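The receding-horizon mechanism just described can be sketched for a discretized toy system: at each sampling instant, the finite-horizon min-max problem is solved (here by brute force over coarse control and disturbance grids), only the first control value is applied, and the horizon shifts forward. The scalar dynamics, grids, and quadratic costs below are illustrative stand-ins for the continuous-time problem of Definition 3.1, not the paper's setting.

```python
import itertools

# Hedged sketch of min-max receding-horizon MPC for a scalar toy system
# x+ = 0.9*x + u + w.  Dynamics, grids, and the quadratic stage cost are
# illustrative assumptions.

def f(x, u, w):
    return 0.9 * x + u + w

U = [-1.0, -0.5, 0.0, 0.5, 1.0]    # control constraint set (grid)
W = [-0.1, 0.0, 0.1]               # disturbance set (grid)
stage = lambda x, u: x * x + 0.1 * u * u

def worst_case_cost(x, us):
    """Maximize the cost over all disturbance sequences (inner max of min-max)."""
    worst = 0.0
    for ws in itertools.product(W, repeat=len(us)):
        xk, cost = x, 0.0
        for u, w in zip(us, ws):
            cost += stage(xk, u)
            xk = f(xk, u, w)
        worst = max(worst, cost + xk * xk)  # terminal penalty |x_N|^2
    return worst

def mpc_step(x, horizon=3):
    """Solve the FHOCP by brute force; return only the first control
    (receding-horizon fashion)."""
    best = min(itertools.product(U, repeat=horizon),
               key=lambda us: worst_case_cost(x, us))
    return best[0]

# Closed loop: apply only the first control at each sampling instant.
x = 2.0
for _ in range(8):
    x = f(x, mpc_step(x), 0.0)   # here: zero realized disturbance
print(abs(x))  # the state is driven toward the origin
```

In the continuous-time min-max setting of Definition 3.1, the brute-force grid search is replaced by optimization over feedback control laws, but the receding-horizon logic is the same: solve, apply the first portion of the control, shift, repeat.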

In the following, we need some definitions, which can be found, for example, in [5].

Definition 3.2. (i) A feedback control is called a feasible solution of the FHOCP at time if, for a given initial value at time , the feedback steers the state of the system (1.1) into at time , that is, , for all .
(ii) A set is called positively invariant if for all a feedback control keeps the trajectory of the system (1.1) in , that is, for all .

To prove that the system (1.1) with the control obtained by solving the FHOCP has the ISDS property, we need the following assumption.

Assumption 3.3. (1) There exist functions , where is locally Lipschitz continuous such that
(2) The FHOCP in Definition 3.1 admits a feasible solution at the initial time .
(3) There exists a controller such that the system (1.1) has the ISDS property.
(4) For each , there exists a locally Lipschitz continuous function such that the terminal region is a positively invariant set and we have where , , and denotes the derivative of along the solution of system (1.1) with the control from point 3 of this assumption.
(5) For each sufficiently small , it holds that
(6) The optimal cost function is locally Lipschitz continuous.

Remark 3.4. In [6], it is discussed that a different stage cost, defined for example by , can be used for the FHOCP. In view of stability, the stage cost has to fulfill some additional assumptions; see [6, Chapter 3.4].

Remark 3.5. The assumption (3.7) is needed to assure that the cost function satisfies the lower estimate in (2.7). However, we have not investigated whether this condition is restrictive or not. In the case of discrete-time systems and the corresponding cost function, the assumption (3.7) is not necessary; see the proofs in [6, 12–15].

The following theorem establishes ISDS of the system (1.1), using the optimal control input obtained from solving the FHOCP.

Theorem 3.6. Consider a system of the form (1.1). Under Assumption 3.3, the system resulting from the application of the predictive control strategy to the system, namely, , possesses the ISDS property.

Remark 3.7. Note that the gains and the decay rate of the definition of the ISDS property, Definition 2.2, can be calculated using Assumption 3.3, as it is partially displayed in the following proof.

Proof. We show that the optimal cost function is an ISDS-Lyapunov function, in the following steps: (i) the control problem admits a feasible solution for all times ; (ii) satisfies the conditions (2.7) and (2.8). Then, by application of Theorem 2.4, the ISDS property follows.
Let us prove feasibility first: we suppose that a feasible solution at time exists. For , we construct a control by where is the controller from Assumption 3.3, point 3. Since controls into and is a positively invariant set, keeps the system's trajectory in for under the constraints of the FHOCP. This means that from the existence of a feasible solution at time , we obtain a feasible solution at time . Since we assume that a feasible solution of the FHOCP at the time exists (Assumption 3.3, point 2), it follows that a feasible solution exists for every .
We replace in (3.8) by . Then, it follows from (3.6) that
holds. From this and with (3.5), it holds that
Now, with Assumption 3.3, point 5, we have
This shows that satisfies (2.7). Now, denote . From , we get and therefore
Now, we show that satisfies the condition (2.8). Note that by Assumption 3.3, point 6, is locally Lipschitz continuous. With (3.13), it holds that This leads to For and using the first point of Assumption 3.3, we obtain By definition of and , this implies where the function is locally Lipschitz continuous. We conclude that is an ISDS-Lyapunov function for the system and by application of Theorem 2.4 the system has the ISDS property.

In the next subsection, we extend the analysis of ISDS for MPC from single systems to interconnected systems.

3.2. Interconnected Systems

We consider interconnected systems with disturbances of the form where , measurable and essentially bounded, are the control inputs and are the unknown disturbances. We assume that the states, disturbances, and inputs fulfill the constraints where , and are compact and convex sets containing the origin in their interior.

Now, we are going to determine an MPC scheme for interconnected systems. An overview of existing distributed and hierarchical MPC schemes can be found in [29]. The used scheme in this work is inspired by the min-max approach for single systems as in Definition 3.1, see [14, 15].

First, we define the cost function of the th subsystem by where , is the initial value of the th subsystem at time and is a feedback control, essentially bounded, locally Lipschitz in and measurable in , where is a compact and convex set containing the origin in its interior. is the stage cost, where . penalizes the disturbance and penalizes the internal input for all . and are locally Lipschitz continuous functions with , and is the terminal penalty of the th subsystem, .

In contrast to single systems, we add the terms , to the cost function due to the interconnected structure of the subsystems. Here, two problems arise: the formulation of an optimal control problem for each subsystem and the calculation/determination of the internal inputs .

We retain the minimization of with respect to and the maximization of with respect to as in Definition 3.1 for single systems. In the spirit of ISS/ISDS, which treats the internal inputs as "disturbances," we also maximize the cost function with respect to (worst-case approach). Since we assume that , we obtain an optimal solution , of the control problem.

The drawbacks of this approach are that, on the one hand, we do not use the system equations (3.19) to predict and that, on the other hand, the computation of the optimal solution could be numerically inefficient, especially if the number of subsystems is large and/or the sets are large. Moreover, due to the worst-case approach of maximizing over , the obtained optimal control for each subsystem could be very conservative, which leads to very conservative ISS or ISDS estimates.

To avoid these drawbacks of the maximization of with respect to , one could use the system equation (3.19) to predict instead.

A numerically efficient way to calculate the optimal solutions of the subsystems would be a parallel calculation. Due to the interconnected structure of the system, information about the states of the subsystems has to be exchanged. However, this exchange of information prevents the calculation of an optimal solution: to the best of our knowledge, no theorem has been proved that guarantees the existence of an optimal solution of the optimal control problem using such a parallel strategy. We conclude that a parallel calculation cannot help in our case.

Another approach to an MPC scheme for networks is inspired by the hierarchical MPC scheme in [30]. One could use the predictions of the internal inputs , as follows: at sampling time , all subsystems calculate the optimal solution iteratively. This means that for the calculation of the optimal solution for the th subsystem, the currently "optimized" trajectories of the subsystems , denoted by , and the "optimal" trajectories of the subsystems from the optimization at sampling time , denoted by , are used.

The advantage of this approach would be that the optimal solution is not as conservative as with the min-max approach and that it could be computed in a numerically efficient way, since the model is used to predict the "optimal" trajectories and the maximization over is avoided. The drawback is that, using this hierarchical approach, the optimal cost function of each subsystem depends on the trajectories . Then, to the best of our knowledge, it is not possible to show that the optimal cost functions are ISDS-Lyapunov functions of the subsystems, which is a crucial step in proving ISDS of a subsystem or of the whole network, because no helpful estimates for the Lyapunov function properties can be performed due to the dependence of the optimal cost functions on the trajectories .

The FHOCP for the th subsystem reads as follows: where the terminal region is a compact and convex set with the origin in its interior.

The resulting optimal control of each subsystem is a feedback control law, that is, , where , , and is essentially bounded, locally Lipschitz in , and measurable in , for all .

To show that each subsystem and the whole system have the ISDS property using the mentioned distributed MPC scheme, we suppose the following assumption for the th subsystem of (3.19).

Assumption 3.8. (1) There exist functions , such that
(2) The FHOCP admits a feasible solution at the initial time .
(3) There exists a controller such that the th subsystem of (3.19) has the ISDS property.
(4) For each , there exists a locally Lipschitz continuous function such that the terminal region is a positively invariant set and we have for almost all , where , , and denotes the derivative of along the solution of the th subsystem of (3.19) with the control from point 3 of this assumption.
(5) For each sufficiently small it holds that
(6) The optimal cost function is locally Lipschitz continuous.

Now, we can state that each subsystem possesses the ISDS property using the mentioned MPC scheme.

Theorem 3.9. Consider an interconnected system of the form (3.19). Let Assumption 3.8 be satisfied for each subsystem. Then, each subsystem resulting from the application of the control obtained by the FHOCP for each subsystem to the system, namely, possesses the ISDS property.

Proof. Consider the th subsystem. We show that the optimal cost function is an ISDS-Lyapunov function for the th subsystem. We abbreviate .
By following the steps of the proof of Theorem 3.6, we conclude that there exists a feasible solution for all times and that by (3.25) the functional satisfies the condition using . Note that by Assumption 3.8, point 6, is locally Lipschitz continuous. It holds that and, equivalently, , which implies for almost all and all , where , and , where is locally Lipschitz continuous.
Since can be chosen arbitrarily, we conclude that each subsystem has an ISDS-Lyapunov function. It follows that each subsystem has the ISDS property.

To investigate whether the whole system has the ISDS property, we collect all functions in a matrix , , which defines a map as in (2.12).

Using the small-gain condition for , the ISDS property for the whole system can be guaranteed.

Corollary 3.10. Consider an interconnected system of the form (3.19). Let Assumption 3.8 be satisfied for each subsystem. If satisfies the small-gain condition (2.13), then the whole system possesses the ISDS property.

Proof. Each subsystem has an ISDS-Lyapunov function with gains . This follows from Theorem 3.9. The matrix satisfies the SGC, and all assumptions of Theorem 2.8 are satisfied. It follows that with , , and , the whole system of the form has the ISDS property.

In the next section, we investigate the ISS property for MPC of TDSs.

4. MPC and ISS for Time-Delay Systems

Now, we introduce the ISS property for MPC of TDSs. We derive conditions to assure that a single system, a subsystem of a network, and the whole network possess the ISS property when applying the control obtained by an MPC scheme for TDSs.

4.1. Single Systems

We consider systems of the form (2.18) with disturbances, where the disturbance is unknown and takes values in a compact and convex set containing the origin. The input is an essentially bounded and measurable control subject to input constraints, where the constraint set is compact and convex and contains the origin in its interior. The functional has to satisfy the same conditions as in the previous section to assure that a unique solution of (4.1) exists.

The aim is to find an (optimal) control such that the system (4.1) has the ISS property.

Due to the presence of disturbances, we apply a feedback control structure, which compensates the effect of the disturbance. This means that we apply a feedback control law to the system. In the rest of this section, we assume that is essentially bounded, locally Lipschitz in , and measurable in . The set is assumed to be compact and convex, containing the origin in its interior. We obtain an MPC control law by solving the following control problem.

Definition 4.1 (Finite horizon optimal control problem with time-delays (FHOCPTD)). Let be the prediction horizon and a feedback control law. The finite horizon optimal control problem with time-delays for a system of the form (4.1) is formulated as where is the initial function of the system at time , and the terminal region and the state constraint set are compact and convex sets with the origin in their interior. is the stage cost, where and are locally Lipschitz continuous with , and is the terminal penalty.

The control problem will be solved at the sampling instants . The optimal solution is denoted by and , and the optimal cost functional is denoted by . The control input to the system (4.1) is defined in the usual receding horizon fashion as

Definition 4.2. (i) A feedback control is called a feasible solution of the FHOCPTD at time if, for a given initial function at time , the feedback steers the state of the system (4.1) into at time , that is, , for all .
(ii) A set is called positively invariant if for all initial functions a feedback control keeps the trajectory of the system (4.1) in , that is, for all .

For the goal of this section, establishing ISS of TDS with the help of MPC, we need the following.

Assumption 4.3. (1) There exist functions such that
(2) The FHOCPTD in Definition 4.1 admits a feasible solution at the initial time .
(3) There exists a controller such that the system (4.1) has the ISS property.
(4) There exists a locally Lipschitz continuous functional such that the terminal region is a positively invariant set and for all we have where , , and denotes the upper right-hand derivative of the functional along the solution of (4.1) with the control from point 3 of this assumption.
(5) There exists a function such that for all it holds
(6) The optimal cost functional is locally Lipschitz continuous.

Now, we can state a theorem that assures ISS of MPC for a single time-delay system with disturbances.

Theorem 4.4. Let Assumption 4.3 be satisfied. Then, the system resulting from the application of the predictive control strategy to the system, namely, , , , possesses the ISS property.

Proof. The proof goes along the lines of the proof of Theorem 3.6 with changes according to time-delays and functionals, that is, we show that the optimal cost functional is an ISS-Lyapunov-Krasovskii functional.
To show that a feasible solution exists for all times , suppose that a feasible solution exists at time . We construct a control by where is the controller from Assumption 4.3, point 3, and . This control steers into , and is a positively invariant set. Hence keeps the system trajectory in for under the constraints of the FHOCPTD. Therefore, the existence of a feasible solution at time implies the existence of a feasible solution at time . Since, by Assumption 4.3, point 2, a feasible solution of the FHOCPTD exists at the initial time , it follows by induction that a feasible solution exists for every .
Replacing in (4.9) by , it follows from (4.7) that hold, and with (4.6) this implies
For the lower bound, it holds that and by (4.8) we have . This shows that satisfies (2.22). Now, we use the notation . With , we have This implies
Note that by Assumption 4.3, point 6, is locally Lipschitz continuous. With (4.14) it holds which leads to Let , and using the first point of Assumption 4.3 we get By definition of and , this implies that is, satisfies the condition (2.23).
We conclude that is an ISS-Lyapunov-Krasovskii functional for the system and by application of Theorem 2.12 the system has the ISS property.

Now, we consider interconnections of TDS and provide conditions such that the whole network, with an optimal control obtained from an MPC scheme, has the ISS property.

4.2. Interconnected Systems

We consider interconnected systems with time-delays and disturbances of the form where are the essentially bounded and measurable control inputs and are the unknown disturbances. We assume that the states, disturbances, and inputs fulfill the constraints where , and are compact and convex sets containing the origin in their interior.

We assume the same MPC strategy for interconnected TDS as in Section 3.2. The FHOCPTD for the th subsystem of (4.20) reads as where is the initial function of the th subsystem at time and the terminal region is a compact and convex set with the origin in its interior. is essentially bounded, locally Lipschitz in , and measurable in , and is a compact and convex set containing the origin in its interior. is the stage cost, where . penalizes the disturbance and penalizes the internal input for all . , and are locally Lipschitz continuous functions with , and is the terminal penalty of the th subsystem.

We obtain an optimal solution , where the control of each subsystem is a feedback control law, which depends on the current states of the whole system, that is, , where , .
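The decentralized strategy above, where each subsystem solves its own FHOCPTD but its feedback law may depend on the current state of the whole network, can be sketched for one sampling interval. Again, this is an illustrative Python sketch with hypothetical names (`decentralized_step`, `solvers`, `integrators`), not the paper's scheme.

```python
def decentralized_step(states, solvers, integrators, t, t_next):
    """One sampling interval for n coupled subsystems.

    solvers[i](t, states) -> control for subsystem i; may use all states
    integrators[i](x_i, u_i, states, t, t_next) -> next state of subsystem i
    """
    # every subsystem first solves its own finite-horizon problem,
    # using the current states of the whole network ...
    controls = [solve(t, states) for solve in solvers]
    # ... then all controls are applied over the same sampling interval
    return [integ(states[i], controls[i], states, t, t_next)
            for i, integ in enumerate(integrators)]
```

The key point mirrored here is that the controls are computed from the network state at the sampling instant, before any subsystem advances, which matches the decentralized receding-horizon structure.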

For the th subsystem of (4.20), we suppose the following assumption.

Assumption 4.5. (1) There exist functions , such that
(2) The FHOCPTD admits a feasible solution at the initial time .
(3) There exists a controller such that the th subsystem of (4.20) has the ISS property.
(4) There exists a locally Lipschitz continuous functional such that the terminal region is a positively invariant set and for all we have where , , and . denotes the upper right-hand derivative of the functional along the solution of the th subsystem of (4.20) with the control from point 3 of this assumption.
(5) For each , there exists a function such that for all it holds
(6) The optimal cost functional is locally Lipschitz continuous.

Now, we state that each subsystem of (4.20) has the ISS property by application of the optimal control obtained by the FHOCPTD.

Theorem 4.6. Consider an interconnected system of the form (4.20). Let Assumption 4.5 be satisfied for each subsystem. Then, each subsystem resulting from the application of the predictive control strategy to the system, namely, , possesses the ISS property.

Proof. Consider the th subsystem. We show that the optimal cost functional is an ISS-Lyapunov-Krasovskii functional for the th subsystem. We abbreviate .
Following the lines of the proof of Theorem 4.4, we have that there exists a feasible solution of the th subsystem for all times and that the functional satisfies the condition using (4.25) and . Note that by Assumption 4.5, point 6, is locally Lipschitz continuous. We arrive at the following: This is equivalent to which implies where
This can be shown for each subsystem and we conclude that each subsystem has an ISS-Lyapunov-Krasovskii functional. It follows that the th subsystem is ISS in maximum formulation.

We collect all functions in a matrix , , which defines a map as in (2.12).

Using the small-gain condition for , we obtain the following corollary from Theorem 4.6.

Corollary 4.7. Consider an interconnected system of the form (4.20). Let Assumption 4.5 be satisfied for each subsystem. If satisfies the small-gain condition (2.13), then the whole system possesses the ISS property.

Proof. We know from Theorem 4.6 that each subsystem of (4.20) has an ISS-Lyapunov-Krasovskii functional with gains . Since the matrix satisfies the small-gain condition, all assumptions of Theorem 2.13 are satisfied and it follows that the whole system of the form is ISS in maximum formulation, where , , and .
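The small-gain condition used in the corollary can be made concrete in the special case of linear gains. Assuming gains of the form gamma_ij(r) = c_ij * r (the paper's gains are general comparison functions; this linear special case and the function name below are ours), the small-gain condition in the maximum formulation is equivalent to requiring every cycle product of gains to be less than one, which can be checked as follows.

```python
def small_gain_maxlinear(C):
    """Return True iff every cycle product of the linear gain matrix C is < 1.

    Uses max-times matrix powers: a cycle of length k whose gain product is
    >= 1 appears as a diagonal entry >= 1 of the k-th max-times power of C,
    and only cycle lengths up to n need to be checked.
    """
    n = len(C)
    A = [row[:] for row in C]  # A = C, the max-times power 1
    for _ in range(n):
        if any(A[i][i] >= 1.0 for i in range(n)):
            return False  # some cycle has gain product >= 1
        # next max-times power: (A o C)[i][j] = max_k A[i][k] * C[k][j]
        A = [[max(A[i][k] * C[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return True
```

For example, two subsystems with mutual gains 0.5 satisfy the condition (the single cycle has product 0.25), while gains 2.0 and 0.6 violate it (cycle product 1.2).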

5. Conclusions

We have combined the ISDS property with MPC for nonlinear continuous-time systems with disturbances. For single systems, we have derived conditions such that by application of the control obtained by an MPC scheme to the system, it has the ISDS property, see Theorem 3.6. Considering interconnected systems, we have proved that each subsystem possesses the ISDS property using the control of the proposed MPC scheme, which is Theorem 3.9. Using a small-gain condition, we have shown in Corollary 3.10 that the whole network has the ISDS property.

Considering single systems with time-delays, we have proved in Theorem 4.4 that a TDS has the ISS property using the control obtained by an MPC scheme, where we have used ISS-Lyapunov-Krasovskii functionals. For interconnected TDSs, we have established a theorem that guarantees that each closed-loop subsystem, obtained by application of the control of a decentralized MPC scheme, has the ISS property, see Theorem 4.6. From this result and using Theorem 2.13, we have shown that the whole network with time-delays has the ISS property under a small-gain condition, see Corollary 4.7.

In future research, we are going to derive conditions for open-loop MPC schemes to assure ISDS and ISS of TDSs, respectively. The differences between the two schemes, closed-loop and open-loop, will be analyzed and applied in practice.

Note that the results presented here are first steps in the approaches of ISDS for MPC and ISS for MPC with time-delays. More detailed studies should be done in these directions, especially regarding applications of these approaches. Therefore, numerical algorithms for the implementation of the proposed schemes, as in [5, 7], for example, should be developed. It could be analyzed whether and how other existing algorithms can be used, or how they should be adapted, to implement the results presented in this work. The advantages of using ISDS for MPC in contrast to ISS for MPC could be investigated and applied in practice.

Furthermore, one can investigate ISDS and ISS for unconstrained nonlinear MPC, as it was done in [17, 31], for example.

Acknowledgment

This research is funded by the German Research Foundation (DFG) as part of the Collaborative Research Centre 637 “Autonomous Cooperating Logistic Processes: A Paradigm Shift and its Limitations” (SFB 637).