Abstract

Finite-horizon optimal control problems for discrete-time switched linear control systems are investigated in this paper. Two kinds of quadratic cost functions are considered, which differ in their weight matrices: in one the weight matrices are subsystem dependent, while in the other they are time dependent. For a switched linear control system, not only the control input but also the switching signal is a design variable that must be chosen to minimize the cost function. As a result, optimal design for switched linear control systems is more involved than for non-switched ones. By using the principle of dynamic programming, the optimal control laws, consisting of both the optimal switching signal and the optimal control input, are obtained for the two problems. Two examples are given to verify the theoretical results of this paper.

1. Introduction

A switched system usually consists of a family of subsystems described by differential or difference equations and a logical rule that governs the switching among them. Such systems arise in many engineering fields, such as power electronics, embedded systems, manufacturing, and communication networks. In the past decade or so, the analysis and synthesis of switched linear control systems have been extensively studied [1–28]. Compared with traditional optimal control problems, not only the control input but also the switching signal needs to be designed to minimize the cost function.

The first focus of this paper is the finite-horizon optimal regulation of discrete-time switched linear systems. The goal is to develop a set of optimal control strategies that minimize the given quadratic cost function. The problem is of fundamental importance in both theory and practice and has challenged researchers for many years; the bottleneck lies mainly in the determination of the optimal switching strategy. Many methods have been proposed to tackle this problem. Algorithms for optimizing the switching instants with a fixed mode sequence have been derived for general switched systems in [29] and for autonomous switched systems in [30].

The finite-horizon optimal control problem for discrete-time switched linear control systems is investigated in [31]. Motivated by that work, two kinds of quadratic cost functions are considered in this paper. The former is introduced in [31], where the state and input weight matrices are subsystem dependent. The latter is formulated by us, with time-dependent weight matrices. For these two kinds of cost functions, we formulate two finite-horizon optimal control problems, and two novel Riccati mappings are built up that are equivalent to the one in [31]. Although the optimal quadratic regulation of discrete-time switched linear systems has been discussed in [31], there is at least one difference between this paper and [31]: the control strategies proposed in this paper are not the same as those of [31].

This paper is organized as follows. Section 2 presents the problem formulation. Section 3 derives the optimal control laws for discrete-time switched linear systems. Two examples are given in Section 4. Section 5 concludes the paper.

Notations. Notations in this paper are quite standard. The superscript "$T$" stands for the transpose of a matrix. $R^n$ and $R^{n\times m}$ denote the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices, respectively. The notation $X>0$ ($X\ge 0$) means that the matrix $X$ is positive definite (positive semidefinite).

2. Problem Formulation

Consider the discrete-time switched linear system
\[
x(k+1)=A_{r(k)}x(k)+B_{r(k)}u(k),\quad k=0,1,\dots,N-1, \tag{2.1}
\]
where $x(k)\in R^n$ is the state, $u(k)\in R^p$ is the control input, and $r(k)\in M=\{1,2,\dots,d\}$ is the switching signal to be designed. For each $i\in M$, $A_i$ and $B_i$ are constant matrices of appropriate dimensions, and the pair $(A_i,B_i)$ is called a subsystem of (2.1). This switched linear system is time invariant in the sense that the set of available subsystems $\{(A_i,B_i)\}_{i=1}^{d}$ is independent of the time $k$. We assume that there is no internally forced switching, that is, the system can stay in or switch to any mode at any time instant. It is assumed that the initial state of the system, $x(0)=x_0$, is a given constant.
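Although the paper itself presents no code, the dynamics (2.1) are straightforward to simulate. The Python/NumPy sketch below is ours and is only illustrative; the names simulate_switched_system, A_list, and B_list are assumptions, not notation from the paper.

```python
# A minimal sketch (ours, not from the paper) of rolling out the dynamics (2.1).
import numpy as np

def simulate_switched_system(A_list, B_list, x0, r, u):
    """Return the trajectory x(0),...,x(N) of x(k+1) = A_{r(k)} x(k) + B_{r(k)} u(k)."""
    x = [np.asarray(x0, dtype=float)]
    for k in range(len(r)):
        i = r[k]                                   # active mode at time k
        x.append(A_list[i] @ x[k] + B_list[i] @ np.atleast_1d(u[k]))
    return x
```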

Due to the switching signal, and in contrast to the traditional optimal control problem for linear time-invariant systems, two kinds of cost functions for finite-horizon optimal control of discrete-time switched linear systems are introduced. The first one is
\[
J_1(u,r)=x(N)^T Q_f x(N)+\sum_{j=0}^{N-1}\Bigl[x(j)^T Q_{r(j)}x(j)+u(j)^T R_{r(j)}u(j)\Bigr], \tag{2.2}
\]
where $Q_f=Q_f^T\ge 0$ is the terminal state weight matrix, and $Q_i=Q_i^T>0$ and $R_i=R_i^T>0$ are the running weight matrices for the state and the input of subsystem $i\in M$.

The second one is
\[
J_2(u,r)=x(N)^T Q_f x(N)+\sum_{j=0}^{N-1}\Bigl[x(j)^T Q_j x(j)+u(j)^T R_j u(j)\Bigr], \tag{2.3}
\]
where $Q_f=Q_f^T\ge 0$ is the terminal state weight matrix, and $Q_j=Q_j^T>0$ and $R_j=R_j^T>0$ are the running weight matrices for the state and the input at the time instant $j\in\{0,1,\dots,N-1\}$.
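For concreteness, a possible numerical evaluation of the two cost functions is sketched below (ours; cost_J1, cost_J2, Q_list, R_list, Q_seq, R_seq are placeholder names, with the trajectory and input stored as lists of NumPy vectors and all weights given as 2-D arrays).

```python
import numpy as np

def cost_J1(x, u, r, Q_list, R_list, Qf):
    """Subsystem-dependent cost (2.2): weights Q_{r(j)}, R_{r(j)} selected by the active mode."""
    J = x[-1] @ Qf @ x[-1]
    for j, i in enumerate(r):
        uj = np.atleast_1d(u[j])
        J += x[j] @ Q_list[i] @ x[j] + uj @ R_list[i] @ uj
    return float(J)

def cost_J2(x, u, Q_seq, R_seq, Qf):
    """Time-dependent cost (2.3): weights Q_j, R_j indexed by the time instant j."""
    J = x[-1] @ Qf @ x[-1]
    for j in range(len(Q_seq)):
        uj = np.atleast_1d(u[j])
        J += x[j] @ Q_seq[j] @ x[j] + uj @ R_seq[j] @ uj
    return float(J)
```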

Remark 2.1. The cost function $J_1$ is introduced in [31]; in $J_1$ the weight matrices are subsystem dependent. The cost function $J_2$ is introduced by us; in this case, the weight matrices are time dependent.

The goal of this paper is to solve the following two finite-horizon optimal control problems for switched linear systems.

Problem 1. Find $u(j)$ and $r(j)$ that minimize $J_1(u,r)$ subject to the system (2.1).

Problem 2. Find $u(j)$ and $r(j)$ that minimize $J_2(u,r)$ subject to the system (2.1).

3. Optimal Solutions

3.1. Solutions to Problem 1

To derive the minimum value of the cost function $J_1$ subject to system (2.1), we define the Riccati mapping $f_i:\,Y\to Y$ (with $Y$ the set of symmetric positive semidefinite matrices) for each subsystem $(A_i,B_i)$ and weight matrices $Q_i$ and $R_i$, $i\in M$:
\[
f_i(P)=\bigl(A_i-B_iK_i(P)\bigr)^T P\bigl(A_i-B_iK_i(P)\bigr)+K_i^T(P)R_iK_i(P)+Q_i, \tag{3.1}
\]
where
\[
K_i(P)=\bigl(R_i+B_i^T P B_i\bigr)^{-1}B_i^T P A_i. \tag{3.2}
\]
Let $H_N=\{Q_f\}$ be the set consisting of the single matrix $Q_f$. Define the set $H_k$ for $0\le k<N$ iteratively as
\[
H_k=\bigl\{X \mid X=f_i(P),\ i\in M,\ P\in H_{k+1}\bigr\}. \tag{3.3}
\]
Now we give the main result of this paper.
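As an illustration, the mapping (3.1)-(3.2) and the backward construction (3.3) can be coded directly. The sketch below is ours and assumes the weights are supplied as NumPy arrays; note that, as defined, the cardinality of $H_k$ is $d^{N-k}$, so this exhaustive construction is practical only for short horizons or with additional pruning of redundant matrices (not discussed in the paper).

```python
import numpy as np

def riccati_map(P, A, B, Q, R):
    """Riccati mapping f_i of (3.1), returned together with the gain K_i(P) of (3.2)."""
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # K_i(P), via a linear solve
    Acl = A - B @ K
    return Acl.T @ P @ Acl + K.T @ R @ K + Q, K

def build_H_sets(A_list, B_list, Q_list, R_list, Qf, N):
    """Construct H_N = {Qf} and H_k, 0 <= k < N, by the iteration (3.3)."""
    d = len(A_list)
    H = [None] * (N + 1)
    H[N] = [Qf]
    for k in range(N - 1, -1, -1):
        H[k] = [riccati_map(P, A_list[i], B_list[i], Q_list[i], R_list[i])[0]
                for i in range(d) for P in H[k + 1]]
    return H
```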

Theorem 3.1. The minimum value of the cost function $J_1$ in Problem 1 is
\[
J_1(u^*,r^*)=\min_{P\in H_0}x_0^T P x_0. \tag{3.4}
\]
Furthermore, for $k\ge 0$, if one defines
\[
\bigl(P_k^*,i_k^*\bigr)=\arg\min_{P\in H_k}x(k)^T P x(k) \tag{3.5}
\]
(where $i_k^*$ denotes a mode such that $P_k^*=f_{i_k^*}(P)$ for some $P\in H_{k+1}$), then the optimal switching signal and the optimal control input at the time instant $k$ are
\[
r^*(k)=i_k^*, \tag{3.6}
\]
\[
u^*(k)=-K_{i_k^*}\bigl(P_k^*\bigr)x(k), \tag{3.7}
\]
where $K_{i_k^*}(P_k^*)$ is defined by (3.2).

Proof. For the cost function $J_1$, applying the principle of dynamic programming yields the following Bellman equation for $k=0,1,\dots,N-1$:
\[
J_{1,k}^*(u,r)=\min_{i\in M,\;u\in R^p}\bigl\{x^T(k)Q_i x(k)+u^T(k)R_i u(k)+J_{1,k+1}^*(u,r)\bigr\} \tag{3.8}
\]
with the terminal condition
\[
J_{1,N}^*=x^T(N)Q_f x(N). \tag{3.9}
\]
We now prove that the solution of the Bellman equation (3.8)-(3.9) can be written as
\[
J_{1,k}^*=\min_{P\in H_k}x^T(k)P x(k). \tag{3.10}
\]
We use mathematical induction to prove that (3.10) holds for $k=0,1,\dots,N$.

(i) It is easy to see that (3.10) holds for $k=N$.

(ii) Assume that (3.10) holds for $k+1$, that is,
\[
J_{1,k+1}^*=\min_{P\in H_{k+1}}x^T(k+1)P x(k+1). \tag{3.11}
\]
By (3.8), we have
\[
\begin{aligned}
J_{1,k}^*(u,r)&=\min_{i\in M,\;u(k)\in R^p}\Bigl\{x^T(k)Q_i x(k)+u^T(k)R_i u(k)+\min_{P\in H_{k+1}}x^T(k+1)P x(k+1)\Bigr\}\\
&=\min_{i\in M,\;u(k)\in R^p}\Bigl\{x^T(k)Q_i x(k)+u^T(k)R_i u(k)+x^T(k+1)P_{k+1}^*x(k+1)\Bigr\}\\
&=\min_{i\in M,\;u(k)\in R^p}\Bigl\{x^T(k)Q_i x(k)+u^T(k)R_i u(k)+\bigl(A_i x(k)+B_i u(k)\bigr)^T P_{k+1}^*\bigl(A_i x(k)+B_i u(k)\bigr)\Bigr\}\\
&=\min_{i\in M,\;u(k)\in R^p}\Bigl\{x^T(k)\bigl(Q_i+A_i^T P_{k+1}^*A_i\bigr)x(k)+u^T(k)\bigl(R_i+B_i^T P_{k+1}^*B_i\bigr)u(k)+2x^T(k)A_i^T P_{k+1}^*B_i u(k)\Bigr\}.
\end{aligned} \tag{3.12}
\]
Let
\[
H_i(u)=u^T\bigl(R_i+B_i^T P_{k+1}^*B_i\bigr)u+2x^T(k)A_i^T P_{k+1}^*B_i u. \tag{3.13}
\]
By simple calculation, we have
\[
\frac{\partial H_i(u)}{\partial u}=2\bigl(R_i+B_i^T P_{k+1}^*B_i\bigr)u+2B_i^T P_{k+1}^*A_i x(k). \tag{3.14}
\]
Since $u(k)$ is unconstrained, its optimal value $u_i^*(k)$ must satisfy $\partial H_i(u)/\partial u=0$. It follows that
\[
u_i^*(k)=-\bigl(R_i+B_i^T P_{k+1}^*B_i\bigr)^{-1}B_i^T P_{k+1}^*A_i x(k)=-K_i\bigl(P_{k+1}^*\bigr)x(k). \tag{3.15}
\]
It follows that
\[
\begin{aligned}
J_{1,k}^*&=\min_{i\in M}\Bigl\{x^T(k)\bigl(Q_i+A_i^T P_{k+1}^*A_i\bigr)x(k)+u_i^{*T}(k)\bigl(R_i+B_i^T P_{k+1}^*B_i\bigr)u_i^*(k)+2x^T(k)A_i^T P_{k+1}^*B_i u_i^*(k)\Bigr\}\\
&=\min_{i\in M}\Bigl\{x^T(k)\bigl(Q_i+A_i^T P_{k+1}^*A_i\bigr)x(k)+x^T(k)K_i^T\bigl(P_{k+1}^*\bigr)\bigl(R_i+B_i^T P_{k+1}^*B_i\bigr)K_i\bigl(P_{k+1}^*\bigr)x(k)\\
&\qquad\qquad-2x^T(k)A_i^T P_{k+1}^*B_i K_i\bigl(P_{k+1}^*\bigr)x(k)\Bigr\}\\
&=\min_{i\in M,\;P\in H_{k+1}}x^T(k)f_i(P)x(k)=\min_{P\in H_k}x^T(k)P x(k).
\end{aligned} \tag{3.16}
\]
Then the optimal switching signal and the optimal control input at time $k$ are $r^*(k)=i_k^*$ and $u^*(k)=-K_{i_k^*}(P_k^*)x(k)$, respectively. This means that (3.10) also holds for $k$, which completes the proof.
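To make the resulting control law concrete, the following sketch (ours) augments the backward construction so that each element of $H_k$ remembers the mode $i$ and the successor matrix $P\in H_{k+1}$ that generated it; the forward pass then selects the minimizer as in (3.5)-(3.6) and applies the gain computed from the successor matrix, as in (3.15). It reuses the riccati_map helper sketched after (3.3); the function names are ours.

```python
import numpy as np  # riccati_map is the helper sketched after (3.3)

def backward_sweep(A_list, B_list, Q_list, R_list, Qf, N):
    """Like build_H_sets, but each entry of H[k] is (P, generating mode i, index of its successor in H[k+1])."""
    H = [None] * (N + 1)
    H[N] = [(Qf, None, None)]
    for k in range(N - 1, -1, -1):
        H[k] = []
        for i in range(len(A_list)):
            for idx, (Pn, _, _) in enumerate(H[k + 1]):
                P, _ = riccati_map(Pn, A_list[i], B_list[i], Q_list[i], R_list[i])
                H[k].append((P, i, idx))
    return H

def optimal_trajectory(H, A_list, B_list, Q_list, R_list, x0, N):
    """Forward pass: at each k pick argmin_{P in H_k} x(k)^T P x(k) as in (3.5),
    switch to the generating mode (3.6), and apply u(k) = -K_i(P') x(k) as in (3.15)."""
    x = np.asarray(x0, dtype=float)
    traj, modes, inputs = [x], [], []
    for k in range(N):
        best = min(range(len(H[k])), key=lambda j: float(x @ H[k][j][0] @ x))
        _, i, idx = H[k][best]
        P_next = H[k + 1][idx][0]                  # successor matrix that generated the minimizer
        _, K = riccati_map(P_next, A_list[i], B_list[i], Q_list[i], R_list[i])
        u = -K @ x
        x = A_list[i] @ x + B_list[i] @ u
        modes.append(i); inputs.append(u); traj.append(x)
    return traj, modes, inputs
```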

Remark 3.2. In [31], the optimal control input at time $k$ is given as $u^*(k)=-K_{i_k^*}(P_k^*)x(0)$, which is different from our result in (3.7).

Remark 3.3. In [31], another Riccati mapping is given by
\[
f_i(P)=Q_i+A_i^T P A_i-A_i^T P B_i\bigl(R_i+B_i^T P B_i\bigr)^{-1}B_i^T P A_i. \tag{3.17}
\]
It is easy to verify that (3.17) and (3.1) are equivalent to each other. It should be stressed that the matrix inverse appears explicitly in (3.17), whereas in (3.1) it enters only through the gain $K_i(P)$, which has to be computed anyway for the feedback law (3.7). In this sense, our result is more convenient for real applications.
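The claimed equivalence is easy to check numerically; the snippet below is ours and uses arbitrary randomly generated data (a random system and a random positive definite $P$) to compare (3.1) with (3.17).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
Q, R = np.eye(n), np.eye(p)
Mrand = rng.standard_normal((n, n))
P = Mrand @ Mrand.T + np.eye(n)                      # an arbitrary P > 0

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # K_i(P) from (3.2)
f_31 = (A - B @ K).T @ P @ (A - B @ K) + K.T @ R @ K + Q                               # mapping (3.1)
f_317 = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # mapping (3.17)
print(np.allclose(f_31, f_317))                      # expected: True
```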

Remark 3.4. When $M=\{1\}$, the switched system (2.1) reduces to a linear time-invariant system with $(A_1,B_1)=(A,B)$. In this case, the cost function $J_1$ becomes
\[
J_1(u)=x^T(N)Q_f x(N)+\sum_{j=0}^{N-1}\bigl[x^T(j)Q x(j)+u^T(j)R u(j)\bigr]. \tag{3.18}
\]
The Riccati mapping reduces to a discrete-time Riccati equation
\[
P_k=\bigl(A-BK_k\bigr)^T P_{k+1}\bigl(A-BK_k\bigr)+K_k^T R K_k+Q, \tag{3.19}
\]
where
\[
K_k=\bigl(R+B^T P_{k+1}B\bigr)^{-1}B^T P_{k+1}A. \tag{3.20}
\]
It is easy to verify that this form of the discrete-time Riccati equation (3.19)-(3.20) is equivalent to the traditional ones, such as
\[
\begin{aligned}
P_k&=Q+A^T P_{k+1}A-A^T P_{k+1}B\bigl(R+B^T P_{k+1}B\bigr)^{-1}B^T P_{k+1}A,\\
P_k&=Q+A^T\bigl(P_{k+1}^{-1}+B R^{-1}B^T\bigr)^{-1}A,\\
P_k&=Q+A^T P_{k+1}\bigl(I+B R^{-1}B^T P_{k+1}\bigr)^{-1}A.
\end{aligned} \tag{3.21}
\]

3.2. Solutions to Problem 2

To derive the minimum value of the cost function $J_2$ subject to system (2.1), we define the Riccati mapping $f_{i,k}:\,Y\to Y$ for each subsystem $(A_i,B_i)$ and weight matrices $Q_k$ and $R_k$, $i\in M$, $k=0,1,\dots,N-1$:
\[
f_{i,k}(P)=\bigl(A_i-B_iK_i(P)\bigr)^T P\bigl(A_i-B_iK_i(P)\bigr)+K_i^T(P)R_kK_i(P)+Q_k, \tag{3.22}
\]
where
\[
K_i(P)=\bigl(R_k+B_i^T P B_i\bigr)^{-1}B_i^T P A_i. \tag{3.23}
\]
Let $L_N=\{Q_f\}$ be the set consisting of the single matrix $Q_f$. Define the set $L_k$ for $0\le k<N$ iteratively as
\[
L_k=\bigl\{X \mid X=f_{i,k}(P),\ i\in M,\ P\in L_{k+1}\bigr\}. \tag{3.24}
\]
Then we give the following theorem.
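Relative to the sketch given for Problem 1, the only change is that the weights are now indexed by time. A hedged variant is shown below (ours; Q_seq[k] and R_seq[k] play the roles of $Q_k$ and $R_k$ in (3.22)-(3.24)).

```python
import numpy as np

def riccati_map_tv(P, A, B, Qk, Rk):
    """Time-dependent Riccati mapping f_{i,k} of (3.22) with the gain of (3.23)."""
    K = np.linalg.solve(Rk + B.T @ P @ B, B.T @ P @ A)
    Acl = A - B @ K
    return Acl.T @ P @ Acl + K.T @ Rk @ K + Qk, K

def build_L_sets(A_list, B_list, Q_seq, R_seq, Qf, N):
    """Construct L_N = {Qf} and L_k, 0 <= k < N, by the iteration (3.24)."""
    L = [None] * (N + 1)
    L[N] = [Qf]
    for k in range(N - 1, -1, -1):
        L[k] = [riccati_map_tv(P, A_list[i], B_list[i], Q_seq[k], R_seq[k])[0]
                for i in range(len(A_list)) for P in L[k + 1]]
    return L
```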

Theorem 3.5. The minimum value of the cost function $J_2$ in Problem 2 is
\[
J_2(u^*,r^*)=\min_{P\in L_0}x_0^T P x_0. \tag{3.25}
\]
Furthermore, for $k\ge 0$, if one defines
\[
\bigl(P_k^*,i_k^*\bigr)=\arg\min_{P\in L_k}x(k)^T P x(k), \tag{3.26}
\]
(where $i_k^*$ denotes a mode such that $P_k^*=f_{i_k^*,k}(P)$ for some $P\in L_{k+1}$), then the optimal switching signal and the optimal control input at the time instant $k$ are
\[
r^*(k)=i_k^*, \tag{3.27}
\]
\[
u^*(k)=-K_{i_k^*}\bigl(P_k^*\bigr)x(k), \tag{3.28}
\]
where $K_{i_k^*}(P_k^*)$ is defined by (3.23).

The proof is similar to that of Theorem 3.1 and is included for completeness.

Proof. For the cost function $J_2$, applying the principle of dynamic programming yields the following Bellman equation:
\[
J_{2,k}^*(u,r)=\min_{i\in M,\;u\in R^p}\bigl\{x^T(k)Q_k x(k)+u^T(k)R_k u(k)+J_{2,k+1}^*(u,r)\bigr\},\quad k=0,1,\dots,N-1, \tag{3.29}
\]
with the terminal condition
\[
J_{2,N}^*=x^T(N)Q_f x(N). \tag{3.30}
\]
We now prove that the solution of the Bellman equation (3.29)-(3.30) can be written as
\[
J_{2,k}^*(u,r)=\min_{P\in L_k}x^T(k)P x(k). \tag{3.31}
\]
We use mathematical induction to prove that (3.31) holds for $k=0,1,\dots,N$.

(i) It is easy to verify that (3.31) holds for $k=N$.

(ii) Assume that (3.31) holds for $k+1$, that is,
\[
J_{2,k+1}^*(u,r)=\min_{P\in L_{k+1}}x^T(k+1)P x(k+1). \tag{3.32}
\]
By (3.29), we have
\[
\begin{aligned}
J_{2,k}^*(u,r)&=\min_{i\in M,\;u(k)\in R^p}\Bigl\{x^T(k)Q_k x(k)+u^T(k)R_k u(k)+\min_{P\in L_{k+1}}x^T(k+1)P x(k+1)\Bigr\}\\
&=\min_{i\in M,\;u(k)\in R^p}\Bigl\{x^T(k)Q_k x(k)+u^T(k)R_k u(k)+x^T(k+1)P_{k+1}^*x(k+1)\Bigr\}\\
&=\min_{i\in M,\;u(k)\in R^p}\Bigl\{x^T(k)Q_k x(k)+u^T(k)R_k u(k)+\bigl(A_i x(k)+B_i u(k)\bigr)^T P_{k+1}^*\bigl(A_i x(k)+B_i u(k)\bigr)\Bigr\}\\
&=\min_{i\in M,\;u(k)\in R^p}\Bigl\{x^T(k)\bigl(Q_k+A_i^T P_{k+1}^*A_i\bigr)x(k)+u^T(k)\bigl(R_k+B_i^T P_{k+1}^*B_i\bigr)u(k)+2x^T(k)A_i^T P_{k+1}^*B_i u(k)\Bigr\}.
\end{aligned} \tag{3.33}
\]
Let
\[
S_i(u)=u^T\bigl(R_k+B_i^T P_{k+1}^*B_i\bigr)u+2x^T(k)A_i^T P_{k+1}^*B_i u. \tag{3.34}
\]
By simple calculation, we have
\[
\frac{\partial S_i(u)}{\partial u}=2\bigl(R_k+B_i^T P_{k+1}^*B_i\bigr)u+2B_i^T P_{k+1}^*A_i x(k). \tag{3.35}
\]
Since $u(k)$ is unconstrained, its optimal value $u_i^*(k)$ must satisfy $\partial S_i(u)/\partial u=0$. It follows that
\[
u_i^*(k)=-\bigl(R_k+B_i^T P_{k+1}^*B_i\bigr)^{-1}B_i^T P_{k+1}^*A_i x(k)=-K_i\bigl(P_{k+1}^*\bigr)x(k). \tag{3.36}
\]
It follows that
\[
\begin{aligned}
J_{2,k}^*(u,r)&=\min_{i\in M}\Bigl\{x^T(k)\bigl(Q_k+A_i^T P_{k+1}^*A_i\bigr)x(k)+u_i^{*T}(k)\bigl(R_k+B_i^T P_{k+1}^*B_i\bigr)u_i^*(k)+2x^T(k)A_i^T P_{k+1}^*B_i u_i^*(k)\Bigr\}\\
&=\min_{i\in M}\Bigl\{x^T(k)\bigl(Q_k+A_i^T P_{k+1}^*A_i\bigr)x(k)+x^T(k)K_i^T\bigl(P_{k+1}^*\bigr)\bigl(R_k+B_i^T P_{k+1}^*B_i\bigr)K_i\bigl(P_{k+1}^*\bigr)x(k)\\
&\qquad\qquad-2x^T(k)A_i^T P_{k+1}^*B_i K_i\bigl(P_{k+1}^*\bigr)x(k)\Bigr\}\\
&=\min_{i\in M,\;P\in L_{k+1}}x^T(k)f_{i,k}(P)x(k)=\min_{P\in L_k}x^T(k)P x(k).
\end{aligned} \tag{3.37}
\]
Then the optimal switching signal and the optimal control input at time $k$ are $r^*(k)=i_k^*$ and $u^*(k)=-K_{i_k^*}(P_k^*)x(k)$, respectively. This means that (3.31) also holds for $k$, which completes the proof.

4. Examples

Example 4.1. Let us consider the following discrete-time switched linear system:
\[
x(k+1)=A_{\sigma(k)}x(k)+B_{\sigma(k)}u(k),\quad k=0,1,\dots,N-1,\quad \sigma(k)\in M=\{1,2\}, \tag{4.1}
\]
where
\[
A_1=\operatorname{diag}(1,2),\quad A_2=\operatorname{diag}(10,10),\quad B_1=\begin{bmatrix}1\\1\end{bmatrix},\quad B_2=\begin{bmatrix}1\\2\end{bmatrix}. \tag{4.2}
\]
The simulation parameters are as follows:
\[
Q_1=\operatorname{diag}(0.1,0.1),\quad Q_2=\operatorname{diag}(0.2,0.2),\quad R_1=1,\quad R_2=0.1,\quad Q_f=\operatorname{diag}(1,1),\quad N=400. \tag{4.3}
\]
We design the controller with the approach in Theorem 3.1. Starting from the initial state $x_0=[1\ \ 1]^T$, the state response of the closed-loop discrete-time switched linear system is shown in Figure 1.
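A possible driver for this example, built on the helper functions sketched in Section 3 (backward_sweep and optimal_trajectory, which are our names), is given below. Since the sets $H_k$ grow as $2^{N-k}$ under the exhaustive construction (3.3), the horizon used here is deliberately shortened from the paper's $N=400$ to a small value purely for illustration; all other data are the parameters (4.2)-(4.3).

```python
import numpy as np  # uses backward_sweep / optimal_trajectory sketched in Section 3

A_list = [np.diag([1.0, 2.0]), np.diag([10.0, 10.0])]
B_list = [np.array([[1.0], [1.0]]), np.array([[1.0], [2.0]])]
Q_list = [np.diag([0.1, 0.1]), np.diag([0.2, 0.2])]
R_list = [np.array([[1.0]]), np.array([[0.1]])]
Qf = np.diag([1.0, 1.0])
x0 = np.array([1.0, 1.0])

N = 10                                   # shortened horizon (the paper uses N = 400)
H = backward_sweep(A_list, B_list, Q_list, R_list, Qf, N)
traj, modes, inputs = optimal_trajectory(H, A_list, B_list, Q_list, R_list, x0, N)
print("optimal cost:", float(min(x0 @ P @ x0 for P, _, _ in H[0])))   # value (3.4)
print("switching sequence:", modes)
```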

Example 4.2. Let us consider the following discrete-time switched linear system borrowed from [32]:
\[
x(k+1)=A_{\sigma(k)}x(k)+B_{\sigma(k)}u(k),\quad k=0,1,\dots,N-1,\quad \sigma(k)\in M=\{1,2\}, \tag{4.4}
\]
where
\[
A_1=\begin{bmatrix}0.545 & 0.430\\ 0.185 & 0.610\end{bmatrix},\quad A_2=\begin{bmatrix}0.555 & 0.37\\ 0.215 & 0.590\end{bmatrix},\quad B_1=\begin{bmatrix}1\\0.5\end{bmatrix},\quad B_2=\begin{bmatrix}1\\3\end{bmatrix}. \tag{4.5}
\]
The simulation parameters are as follows:
\[
Q_1=\operatorname{diag}(1,1),\quad Q_2=\operatorname{diag}(2,2),\quad R_1=0.1,\quad R_2=0.1,\quad Q_f=\operatorname{diag}(10,10),\quad N=50. \tag{4.6}
\]
We design the controller with the approach in Theorem 3.1. Starting from the initial state $x_0=[1\ \ 2]^T$, the state response of the closed-loop discrete-time switched linear system is shown in Figure 2.

5. Conclusions

Based on dynamic programming, the finite-horizon optimal quadratic regulation problem has been studied for discrete-time switched linear systems. For two kinds of quadratic cost functions, the optimal control strategies minimizing the cost, comprising both the continuous control input and the discrete switching signal, have been derived. The infinite-horizon optimal quadratic regulation of discrete-time switched linear systems will be investigated in future work.

Acknowledgments

The authors wish to thank the reviewers for their comments and suggestions, which were invaluable for improving the readability and quality of the paper. The authors would like to acknowledge the support of the National Natural Science Foundation of China under Grants no. 60964004, 60736022, 61164013, and 61164014, the China Postdoctoral Science Foundation under Grant no. 20100480131, the Young Scientist Raise Object Foundation of Jiangxi Province, China, under Grant no. 2010DQ01700, and the Science and Technology Support Project Plan of Jiangxi Province, China, under Grant no. 2010BGB00607.