Applied Computational Intelligence and Soft Computing
Volume 2010 (2010), Article ID 168653, 10 pages
http://dx.doi.org/10.1155/2010/168653
Research Article

Dividing Genetic Computation Method for Robust Receding Horizon Control Design

Tohru Kawabe

Department of Computer Science, University of Tsukuba, Tsukuba, Ibaraki 305-8573, Japan

Received 21 August 2009; Revised 2 December 2009; Accepted 15 January 2010

Academic Editor: Oliver Kramer

Copyright © 2010 Tohru Kawabe. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A new robust Receding Horizon Control (RHC) design approach for sampled-data systems is proposed. The approach is based on a dividing genetic computation for the minimax optimization of a robust finite receding horizon control problem. A numerical example is given to show the effectiveness of the proposed method.

1. Introduction

In the last few decades, Receding Horizon Control (RHC) has been widely accepted in industry [1]. RHC is a powerful online control method that solves a control problem at each sampling instant [2]. A significant merit of RHC is the easy handling of constraints during the design and implementation of the controller [2, 3].

In the standard RHC formulation, the current control action is derived by solving a finite- or infinite-horizon quadratic cost problem at every sampling time, using the current state of the plant as the initial state [1]. This is an online optimization with a large computational burden. RHC has therefore conventionally been applied to systems with relatively slow dynamics, such as petrochemical plants. However, recent advances in computer performance have made it possible to apply RHC to systems with relatively fast dynamics, for example, mechatronic systems. It is therefore important to develop a practical RHC method for such systems.

Incidentally, a drawback of the standard RHC is its explicit lack of robustness with respect to model uncertainties or disturbances, since the online minimized cost function is defined in terms of the nominal system. A possible strategy for realizing robust RHC is to solve the so-called minimax optimization problem, namely, minimization over the control input of a performance measure maximized over plant uncertainties or disturbances. The earliest work was by Campo and Morari [4] and Zheng and Morari [5]. Kothare et al. solved minimax RHC problems with state-space uncertainties through LMIs [6]. Cuzzola et al. improved Kothare's method [6] to reduce conservativeness in [7]. Other minimax RHC methods for systems with model uncertainty can be found in [8–12]. However, the amount of available work on robust RHC is still limited compared with the many methods of robust control synthesis proposed for linear systems. The main reason is that the robust (minimax) RHC problem is generally hard to solve in real time due to the trade-off between the objective function and the constraint conditions. The issue of robust RHC therefore still deserves further attention [2, 3], and effective approaches, especially in the optimization technique, are still required.

Optimization techniques using evolutionary computation, such as Genetic Algorithms (GAs) [13, 14], have been recognized as well suited to many kinds of engineering optimization problems. Multiple individuals can search for multiple solutions in parallel, eventually taking advantage of any similarities available in the family of possible solutions to the problem. Extensions of GAs to many kinds of optimization problems have been proposed in several manners [15–17]. For example, Schaffer [15] proposed an extension of the simple GA to accommodate vector-valued fitness measures, which he called the Vector Evaluated Genetic Algorithm (VEGA). Moreover, Goldberg [13] first proposed the Pareto-based approach as a means of assigning equal probability of reproduction to all nondominated individuals in the population. Fonseca and Fleming [16] proposed a multiobjective ranking method with Pareto-based fitness assignment. However, it is uncertain whether these methods can be applied effectively to the minimax RHC problem.

Therefore, in this paper, a new minimax robust finite RHC design approach based on a new dividing genetic computation for constrained sampled-data systems with structured uncertainties and disturbances is proposed. This approach provides a general and practical framework for the minimax robust finite RHC problem of bounded constrained systems. Since the dividing genetic computation uniformly controls the convergence of the solutions of optimization problems with trade-off conditions, it can be effective for the minimax RHC problem. Using this approach, we can expect to reduce the conservativeness and improve the control performance. A numerical example is given to demonstrate these facts.

2. Problem Formulation

The target system in this paper is a sampled-data control system. Hence, the controlled object with uncertainties is a continuous-time system, and the controller is designed in discrete time. Let us consider the following discrete-time LTI (Linear Time-Invariant) state-space model with structured uncertainties and disturbances:

x(k+1) = (A + LΔR_A)x(k) + (B + LΔR_B)u(k),
y(k) = Cx(k) + η(k),  (1)

where LΔR_A and LΔR_B denote the structured uncertainties, expressed as perturbations of the elements of A and B, respectively. A, B, C, L, R_A, and R_B are constant matrices, and Δ = diag{δ_1, δ_2, …} is a structured uncertainty parameter matrix satisfying Δ^T Δ ≤ rI (r is a given constant). The vectors x(k), u(k), y(k), and η(k) denote the state, input, measured output, and disturbance, respectively. All vectors and matrices have appropriate dimensions.

Then, we can transform this system as

x(k+1) = Ax(k) + Bu(k) + Lw(k),  (2)
z(k) = R_A x(k) + R_B u(k),  (3)
y(k) = Cx(k) + η(k),  (4)

where w(k) = Δz(k). We assume that the system is subject to the following constraints over N steps (j = 0, …, N−1):

w^T(k+j) P_w w(k+j) ≤ 1,
η^T(k+j) P_η η(k+j) ≤ 1,
u^T(k+j) P_u u(k+j) ≤ 1  (j = 0, …, N−1),  (5)

where P_w, P_η, and P_u (P_w, P_η, P_u ≻ 0) are positive definite symmetric weighting matrices for the constraints.

To apply RHC to this system, a quadratic performance measure over a finite horizon, with positive definite constant weighting matrices Q and R (Q, R ≻ 0), given by

J(k) = ∑_{j=0}^{N−1} ( ‖y(k+j+1|k)‖²_Q + ‖u(k+j|k)‖²_R )  (6)

is used, where x(k+j|k), y(k+j|k), and u(k+j|k) are the predicted state of the plant, the predicted output of the plant, and the future control input at time k+j, respectively.
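As a concrete illustration, the cost (6) can be evaluated as below; the function name and the list-of-vectors interface are our own, not the paper's (a minimal sketch in Python with NumPy):

```python
import numpy as np

def finite_horizon_cost(y_pred, u_pred, Q, R):
    """Evaluate J(k) = sum_{j=0}^{N-1} (||y(k+j+1|k)||_Q^2 + ||u(k+j|k)||_R^2).

    y_pred, u_pred: length-N sequences of predicted output / input vectors
    (hypothetical interface; Eq. (6) only defines the scalar J(k))."""
    J = 0.0
    for y, u in zip(y_pred, u_pred):
        J += float(y @ Q @ y) + float(u @ R @ u)
    return J
```

With Q = I and a scalar R = 0.5, this matches the weighting used in the numerical example of Section 5.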

Then, the robust RHC needs to solve the following minimax optimization problem at each step k:

min_{u(k+j|k)} max_{w(k+j|k), η(k+j|k)} J(k)
subject to
w^T(k+j) P_w w(k+j) ≤ 1,
u^T(k+j) P_u u(k+j) ≤ 1,
η^T(k+j) P_η η(k+j) ≤ 1  (j = 0, …, N−1).  (7)

Generally, the following state feedback scheme is employed:

u(k+j|k) = F_{k+j} x(k+j|k)  (j = 0, 1, …, N−1),  (8)

where F_{k+j} is a feedback gain matrix.

Finally, the procedure of robust RHC is summarized as follows. At the current step k, future state values x(k+j|k) (j = 0, …, N−1) are predicted using the state-space model (2)–(4), and future feedback gain matrix candidates F_{k+j} (j = 0, …, N−1) are calculated by solving (7) under (8). Only the first gain matrix F_k is employed to produce the actual control input, and the others are discarded. Then we move to the next step k+1 and repeat the same operations.
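The receding-horizon procedure above can be sketched as follows; `solve_gains` is a hypothetical placeholder for the minimax optimization (7) under the feedback law (8), since the actual solver is developed in the later sections:

```python
import numpy as np

def rhc_step(x, solve_gains):
    """One receding-horizon step: compute the N gain candidates, apply only the
    first one, discard the rest. `solve_gains` is a placeholder for the minimax
    optimization (7) under the feedback law (8)."""
    F = solve_gains(x)            # [F_k, F_{k+1}, ..., F_{k+N-1}]
    return F[0] @ x               # only F_k produces the actual input

def simulate(A, B, x0, solve_gains, steps):
    """Closed-loop run of x(k+1) = A x(k) + B u(k) under the receding-horizon law."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        u = rhc_step(x, solve_gains)
        x = A @ x + B @ u
        traj.append(x.copy())
    return traj
```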

Namely, robust RHC requires an online minimax optimization. However, it is generally difficult to solve this problem as it stands at each step. Therefore, a method that eliminates the maximization procedure and transforms the problem into a simple minimization problem is presented in the next section.

3. Transformation of the Minimax Finite RHC Problem

First, we introduce the following vectors:

X = [x^T(k+1|k)  x^T(k+2|k)  ⋯  x^T(k+N|k)]^T,
Y = [y^T(k+1|k)  y^T(k+2|k)  ⋯  y^T(k+N|k)]^T,
U = [u^T(k|k)  u^T(k+1|k)  ⋯  u^T(k+N−1|k)]^T,
W = [w^T(k|k)  w^T(k+1|k)  ⋯  w^T(k+N−1|k)]^T,
Λ = [η^T(k|k)  η^T(k+1|k)  ⋯  η^T(k+N−1|k)]^T,  (9)

and using state space equations (2)–(4) recursively, we can derive

X = Āx(k) + B̄U + L̄W,
Y = C̄Āx(k) + C̄B̄U + C̄L̄W + Λ,  (10)

where C̄ = diag(C, …, C) and

Ā = [A^T  (A²)^T  ⋯  (A^N)^T]^T,

B̄ = [ B          0          ⋯  0
      AB         B          ⋯  0
      ⋮                     ⋱  ⋮
      A^{N−1}B   A^{N−2}B   ⋯  B ],

L̄ = [ L          0          ⋯  0
      AL         L          ⋯  0
      ⋮                     ⋱  ⋮
      A^{N−1}L   A^{N−2}L   ⋯  L ].  (11)

Hence, we can transform the minimax problem (7) into

min_U γ
subject to
max_{W,Λ} Π ≤ γ,
w^T(k+j) P_w w(k+j) ≤ 1,
u^T(k+j) P_u u(k+j) ≤ 1,
η^T(k+j) P_η η(k+j) ≤ 1  (j = 0, …, N−1),  (12)

where γ > 0 is a scalar parameter and Π is defined as

Π = ‖C̄Āx(k) + C̄B̄U + C̄L̄W + Λ‖²_Q̄ + ‖U‖²_R̄,
Q̄ = diag(Q, …, Q),  R̄ = diag(R, …, R).  (13)

To eliminate the maximization procedure, we have to remove the W and Λ terms from the first constraint. To this end, the following stacked variable vector and transformation matrices are first defined:

ζ = [x^T(k)  W^T  Λ^T  1]^T,  (14)
U = H_u ζ,  (15)
Y = H_y ζ,  (16)
Λ = H_η ζ,  (17)
1 = H_1 ζ,  (18)

where

H_u = [F̄Ā  F̄L̄  0  0],
H_y = [C̄Ā  C̄L̄  I  0],
H_η = [0  0  I  0],
H_1 = [0  0  0  1],  (19)

and where

I = (identity matrix with appropriate dimension),
F̄ = diag(F_k, F_{k+1}, …, F_{k+N−1}).  (20)

By using these, the first constraint condition of problem (12) can be expressed as

max_{W,Λ} ( ‖H_y ζ‖²_Q̄ + ‖H_u ζ‖²_R̄ ) ≤ (H_1 ζ)^T γ (H_1 ζ).  (21)

Note that both the left-hand side and the right-hand side of this inequality are quadratic forms taking positive scalar values. Hence, if the inequality holds for the values of W and Λ that maximize the left-hand side, it must hold for any other values of them. This means that we can eliminate the maximization procedure in the first constraint: we can simply check the following condition instead of the first constraint of problem (12):

‖H_y ζ‖²_Q̄ + ‖H_u ζ‖²_R̄ ≤ (H_1 ζ)^T γ (H_1 ζ).  (22)

Second, H_w(j) is defined. This matrix picks out the appropriate block of W and satisfies the relation w(k+j) = H_w(j) ζ. Then, we can derive

(H_w(j) ζ)^T P_w (H_w(j) ζ) ≤ (H_1 ζ)^T (H_1 ζ)  (j = 0, …, N−1).  (23)

For the constraints on η and u, we can derive the following relations in the same way:

(H_η(j) ζ)^T P_η (H_η(j) ζ) ≤ (H_1 ζ)^T (H_1 ζ),
(H_u(j) ζ)^T P_u (H_u(j) ζ) ≤ (H_1 ζ)^T (H_1 ζ)  (j = 0, …, N−1).  (24)

Furthermore, by using (14)–(21), all the constraints of the minimax problem (12) can be transformed into

subject to, for all ζ ≠ 0:
ζ^T ( H_1^T γ H_1 − H_y^T Q̄ H_y − H_u^T R̄ H_u ) ζ ≥ 0,
ζ^T ( H_1^T H_1 − H_w(j)^T P_w H_w(j) ) ζ ≥ 0,
ζ^T ( H_1^T H_1 − H_u(j)^T P_u H_u(j) ) ζ ≥ 0,
ζ^T ( H_1^T H_1 − H_η(j)^T P_η H_η(j) ) ζ ≥ 0  (j = 0, …, N−1).  (25)

Then, we can transform the original minimax problem (7) into the following one by using the S-procedure [18]:

min_F̄ γ
subject to
H_1^T γ H_1 − H_y^T Q̄ H_y − H_u^T R̄ H_u − ∑_{j=0}^{N−1} ( τ_{wj} S_{wj} + τ_{uj} S_{uj} + τ_{ηj} S_{ηj} ) ⪰ 0,  (26)

S_{wj} = H_1^T H_1 − H_w(j)^T P_w H_w(j),
S_{uj} = H_1^T H_1 − H_u(j)^T P_u H_u(j),
S_{ηj} = H_1^T H_1 − H_η(j)^T P_η H_η(j),  (27)

where τ_{wj}, τ_{uj}, and τ_{ηj} are nonnegative scalars. It must be noted that this transformation gives only a sufficient condition, since the S-procedure is not "lossless" in this case. We therefore cannot avoid design results that are slightly conservative. Nevertheless, the conservativeness introduced by the S-procedure is too small to matter for practical purposes, so we can still expect the design results to be less conservative than those obtained by preexisting methods.

Finally, using the Schur complement [19], we can transform the minimax optimization problem (7) into the following problem:

min_F̄ γ
subject to
[ H_1^T γ H_1 − Σ   H_y^T     H_u^T  ]
[ H_y               Q̄^{−1}    0      ]  ⪰ 0,
[ H_u               0         R̄^{−1} ]
τ_{wj}, τ_{uj}, τ_{ηj} ≥ 0  (j = 0, …, N−1),  (28)

Σ = ∑_{j=0}^{N−1} ( τ_{wj} S_{wj} + τ_{uj} S_{uj} + τ_{ηj} S_{ηj} ).  (29)
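The Schur-complement step rests on the standard equivalence below (stated here for reference; Q̄ and R̄ are positive definite, so their inverses exist):

```latex
\begin{bmatrix} S & V^{T} \\ V & T \end{bmatrix} \succeq 0,\quad T \succ 0
\;\Longleftrightarrow\;
S - V^{T} T^{-1} V \succeq 0 .
```

Applying it with S = H_1^T γ H_1 − Σ, V = [H_y^T  H_u^T]^T, and T = diag(Q̄^{−1}, R̄^{−1}) recovers the quadratic constraint (26) from the LMI (28), since (Q̄^{−1})^{−1} = Q̄.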

It is known that this problem exhibits a trade-off between the objective function and the constraint conditions. Therefore, a fast method for finding as many nondominated solutions with respect to this trade-off as possible is needed. Such a method, using dividing genetic computation, is presented in the next section.

4. RHC Method with Adaptive DA Converter Using Dividing Genetic Computation

4.1. Dividing Genetic Computation

To find the best possible nondominated solutions for the RHC problem (28), partial convergence of the nondominated solutions must be avoided. Therefore, a dividing method that uniformly controls the distribution of solutions is proposed. The proposed method assigns all nondominated individuals to prespecified regions. An example of the dividing strategy for a two-objective minimization problem is shown in Figure 1. The dividing genetic computation method consists of the following procedure. First, the objective space is divided into prespecified regions. The edge points of the whole region correspond to the best solutions for each objective function; in Figure 1, the individuals p_1 and p_7 match them. Then, the fitness f_i of individual p_i is defined as f_i = 1/n_i, where n_i denotes the number of nondominated solutions in the same region as individual p_i. In the example, the fitness values of the individuals illustrated in the figure are (f_1, f_2, f_3, f_4, f_5, f_6, f_7) = (1/3, 1/3, 1/3, 1, 1, 1/2, 1/2). In the proposed evolutionary computation method, a neighborhood is defined for every individual as follows: two objective functions of the multiobjective problem are selected using prespecified selection probabilities, the individuals are arranged on the corresponding two-dimensional coordinates, and the neighborhood of an individual is calculated from the relative distances between all individuals.
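The region-count fitness f_i = 1/n_i can be sketched as below; dividing along a single objective into equal-width regions is our simplification of the two-dimensional division in Figure 1 (a Python sketch with NumPy):

```python
import numpy as np

def dividing_fitness(points, n_regions):
    """Fitness f_i = 1 / n_i, where n_i is the number of nondominated points in
    the same region as point i. The input is assumed to be already nondominated;
    regions are equal-width bins along the first objective (our simplification
    of the two-dimensional division in Figure 1)."""
    f1 = np.array([p[0] for p in points], dtype=float)
    edges = np.linspace(f1.min(), f1.max(), n_regions + 1)
    region = np.clip(np.searchsorted(edges, f1, side="right") - 1, 0, n_regions - 1)
    counts = np.bincount(region, minlength=n_regions)
    return [1.0 / counts[r] for r in region]
```

Points that share a crowded region receive a low fitness, so sparsely populated regions are favored during selection.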

Figure 1: Dividing strategy.

The crossover operator is applied locally within each neighborhood in parallel. Even if the fitness of an individual is relatively very high in a population, it can spread over the succeeding populations only through an overlap of neighborhoods. This prohibits a rapid increase of a relatively high-performance individual, and thus the population diversity is favorably maintained. The evolutionary operators are defined as follows.

(a) Selection is done by considering the number of individuals in the two-dimensional objective space. That is, the fitness Γ_i of individual p_i is defined as Γ_i = 1/n_i, where n_i denotes the number of individuals in the same region as individual p_i. Fitness-proportional selection is employed in the selection process.
(b) The BLX-α method is employed for crossover. The crossover mate is chosen randomly within the neighborhood.
(c) A real-coded string representation is employed for candidate solutions.
(d) Mutation performs a random exchange; that is, it selects some elements randomly in an individual and exchanges their values. Boundary mutation and nonuniform mutation are used to avoid premature convergence of the solutions.

The procedure of dividing genetic computation approach consists of the following steps.

Step 1. Set a generation number 𝑡=0. Randomly generate an initial population 𝑃(𝑡) of 𝑀 individuals.

Step 2. Calculate the fitness of each individual in the current population according to the distribution of the objective space.

Step 3. Select M individuals according to the above fitness; the mate of each individual is then chosen randomly within its neighborhood.

Step 4. Generate a new population P′(t) from P(t) by using the crossover operator.

Step 5. Apply the mutation operator to the newly generated population P′(t).

Step 6. Calculate the fitness of both P(t) and P′(t).

Step 7. Select 𝑀 individuals from all population members on the basis of the fitness.

Step 8. If a terminal condition is satisfied, stop and return the best individuals. Otherwise set 𝑡=𝑡+1 and go to Step 2.

In this procedure, the population size is kept constant at M throughout. To avoid a rapid loss of genetic diversity, duplicate individuals are eliminated from the current population.
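The BLX-α crossover mentioned in operator (b) can be sketched as follows; α = 0.5 is a common default, not a value taken from the paper:

```python
import random

def blx_alpha(p1, p2, alpha=0.5):
    """BLX-alpha crossover for real-coded strings: each child gene is drawn
    uniformly from the parents' interval extended by alpha * (its width) on
    both sides."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        d = hi - lo
        child.append(random.uniform(lo - alpha * d, hi + alpha * d))
    return child
```

Extending the interval beyond the parents lets the child explore slightly outside the parents' span, which counteracts premature convergence.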

As a result of exploratory experiments using the multiobjective ranking [16], the following facts were obtained in comparison with the standard genetic algorithm (GA).

(i) With the proposed method, the solutions are widely distributed over the trade-off surface, and the search performance does not deteriorate significantly.
(ii) The standard GA approaches cause partial convergence of the solutions because of stochastic errors in the iterative process.
(iii) The proposed method clearly finds more widely distributed solutions than the standard GA.

Therefore, the proposed dividing genetic computation method is expected to be effective for the minimax RHC problem.

4.2. Interpolation Using Predictive Control Inputs

Generally, the interpolation of samples in DA conversion is executed by a convolution of the samples with a sampling function. In a sampled-data control system, interpolating the current interval requires future samples, but information about future samples cannot be obtained at the present time. Hence, we would need to wait for this future sample information, and a time delay would occur. For systems with relatively fast dynamics, such as robots or vehicles, a method with a long time delay cannot be applied. That is the point. Therefore, a new idea is proposed: using the predictive control inputs obtained by RHC for the interpolation.

In RHC, the optimal control inputs {û(k|k), û(k+1|k), …, û(k+N−1|k)} are calculated at each step, and only the first control input û(k|k) is used as the real control input. Therefore, we consider using the other optimal control inputs {û(k+1|k), û(k+2|k), …} as virtual future sampling points. In practice, it is only necessary to use the optimal control inputs needed for the interpolation according to the sampling function. Figure 2 shows an example of this approach using the 2nd-order spline function for the interpolation; only û(k+1|k) is used as a virtual future sampling point in this case.

Figure 2: Interpolation based on a 2nd-order spline sampling function using predictive future control inputs.

Hence, by using the predictive control inputs for interpolation, it becomes possible to reduce the time delay in the DA conversion: the total waiting time is just the computation time of the optimization at the current step.

Of course, one needs to take into account that there is a difference between the virtual future sampling points and the real sampling points, that is, û(k+1|k) ≠ u(k+1) at the future step. However, this is not a critical problem, because the influence of the prediction error on the interpolated waveform is small compared with the scale of the prediction error itself. Although the differentiability of each sampling function is lost at the sampling points, this is also not critical compared with the zero-order hold, and a certain level of smoothness can be kept.

4.3. Adaptive DA Converter

The spline functions provide various sampling functions of all orders. Therefore, we consider switching the spline functions optimally according to the system status in the adaptive DA converter. In this paper, we use the spline functions of order m = 0, 1, 2 as sampling functions. In the case of m = 0, the sampling function is equivalent to the staircase function; in the case of m = 1, it is the 1st-order piecewise polynomial function; and for m = 2, the 2nd-order one, as shown in Figure 3. Appropriately selecting the value of m according to the controlled object enables flexible and precise DA conversion in the interpolation operation. Although the interpolation is more precise when using spline functions with m = 3 or higher, these are difficult to apply to systems with fast dynamics because of the larger amount of computation. Therefore, we use only the spline functions of order m = 0, 1, 2.

Figure 3: Sampling functions Ψ and their interpolations (𝑚=1,2,3) (𝜏: sampling interval).

The interpolated signals in the closed-open interval [𝑘𝜏,(𝑘+1)𝜏) using these sampling functions are obtained as follows:

u(t) = ∑_{l=k}^{k+1} u(l) Ψ_{1,2}(t − lτ)  (m = 0, 1),
u(t) = ∑_{l=k−1}^{k+2} u(l) Ψ_3(t − lτ)  (m = 2),  (30)

where Ψ(·) is the sampling function shown in Figure 3, and u(t) and u(l) are the analog signal and the digital signal, respectively.
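The interpolation (30) can be sketched as below; the exact shapes of Ψ in Figure 3 are unavailable here, so standard zero-order, linear, and centered quadratic B-spline kernels are assumed:

```python
def psi(m, s, tau):
    """Sampling function of spline order m = 0, 1, 2 evaluated at offset s
    (assumed kernels: staircase, linear hat, centered quadratic B-spline)."""
    x = s / tau
    if m == 0:                      # staircase: hold the sample over [l*tau, (l+1)*tau)
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    if m == 1:                      # piecewise-linear hat
        return max(0.0, 1.0 - abs(x))
    if m == 2:                      # centered quadratic B-spline
        a = abs(x)
        if a < 0.5:
            return 0.75 - a * a
        if a < 1.5:
            return 0.5 * (1.5 - a) ** 2
        return 0.0
    raise ValueError("m must be 0, 1, or 2")

def interpolate(u, k, m, t, tau):
    """u(t) per Eq. (30): the sum runs over l = k..k+1 for m = 0, 1 and over
    l = k-1..k+2 for m = 2; u maps sample index l to its value."""
    lo, hi = (k, k + 1) if m in (0, 1) else (k - 1, k + 2)
    return sum(u[l] * psi(m, t - l * tau, tau) for l in range(lo, hi + 1))
```

For m = 2, the wider summation window is exactly why a virtual future sample such as û(k+1|k) is needed.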

Figure 4 shows the entire controlled system with the proposed RHC and the adaptive DA converter. As shown in Figure 4, the adaptive DA converter has an internal model with sampling interval τ/d. Note that this internal model 2 is different from internal model 1, whose sampling interval is simply τ. The interval to be interpolated is also partitioned into d sections, and the partition points u_m(j;k) (j = 1, 2, …, d−1) on the interpolated waveforms are used for the selection of the parameter m, which indicates the order of the spline sampling function.

Figure 4: Proposed RHC systems with adaptive DA converter.

The partition points u_m(j;k) are calculated as follows:

u_m(j;k) = ∑_{l=k−φ}^{k+φ−1} u(l) Ψ_m( (k−1)τ + (τ/d)j − lτ )  (j = 1, 2, …, d−1),  (31)

where φ is the number of samples that the sampling function needs for the interpolation; it is adjusted according to the sampling function.

Figure 5 shows the differences in the interpolation and partition points for the sampling functions with m = 0, 1, 2 in the case d = 5. The value of m for each interval is selected as follows. The value of each sample at the partition points is calculated from the state-space equation (2) for the current interval, and the evaluation value using J(k) in (6) is calculated for each m. Then, the value of m with the smallest evaluation value is selected for this interval.
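The order-selection rule above can be sketched as below; the cost callable `J` standing in for the evaluation with (6) is hypothetical:

```python
def select_order(partition_points, J):
    """Pick the spline order m whose partition-point samples u_m(j; k) give the
    smallest evaluation value; `partition_points` maps m -> [u_m(1;k), ...] and
    `J` is a hypothetical stand-in for the cost (6)."""
    return min(partition_points, key=lambda m: J(partition_points[m]))
```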

Figure 5: Interpolation ways (𝑑=5).

From several test simulations, it was found that the partition number d = 5 is most appropriate, considering the trade-off between computation time and precision.

5. Numerical Example

In this section, an example illustrating the effectiveness of the proposed method is given. The example is adopted from the benchmark problem in [20], shown in Figure 6.

Figure 6: Coupled spring-mass system.

Although the original system is a two-mass-spring system without output noise, bounded noise is added here to demonstrate the proposed method. The state-space equations are obtained as follows:

[x_1(k+1)]   [ 1           0           0.1   0   ] [x_1(k)]   [ 0       ]
[x_2(k+1)] = [ 0           1           0     0.1 ] [x_2(k)] + [ 0       ] u(k),  (32)
[x_3(k+1)]   [ −0.1K/m_1   0.1K/m_1    1     0   ] [x_3(k)]   [ 0.1/m_1 ]
[x_4(k+1)]   [ 0.1K/m_2    −0.1K/m_2   0     1   ] [x_4(k)]   [ 0       ]

where m_1 and m_2 are the two masses and K is the spring constant. The state variables x_1 and x_2 are the positions of the two masses, respectively, and x_3 and x_4 are their velocities. Now we assume the following perturbations of m_2 and K:

1 ≤ m_2 ≤ 10,  0.5 ≤ K ≤ K_max,  (33)

and m_1 is constant and equal to 1. The weighting matrices are fixed as Q = I and R = 0.5. The input constraint |u(k)| ≤ 1 must be satisfied, which means that P_u = 1. The constraint on the external disturbance η is set by P_η = 36.0. The prediction horizon of the RHC is set to N = 10.
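For reference, the benchmark matrices of (32) can be assembled as below (a sketch; the parameter default m1 = 1 and the Euler step 0.1 follow the example):

```python
import numpy as np

def two_mass_spring(K, m1=1.0, m2=1.0, dt=0.1):
    """Discrete-time matrices of the two-mass-spring benchmark, Eq. (32)
    (Euler step dt = 0.1; m1 = 1 as in the example)."""
    A = np.array([
        [1.0,           0.0,          dt,   0.0],
        [0.0,           1.0,          0.0,  dt ],
        [-dt * K / m1,  dt * K / m1,  1.0,  0.0],
        [ dt * K / m2, -dt * K / m2,  0.0,  1.0],
    ])
    B = np.array([[0.0], [0.0], [dt / m1], [0.0]])
    return A, B
```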

Furthermore, the following GA parameter specifications are used in the simulation. These values were selected through several tests (see Table 1).

Table 1

The results below are the best ones obtained over 20 repeated runs.

Figure 7 shows the closed-loop response of the output. From this figure, we can say that the proposed method has good robust performance.

Figure 7: Closed-loop response.

Figure 8 shows how the parameter m of the adaptive DA converter changes. From this figure, we can see that the spline function with m = 0 (staircase function) tends to be selected when the control input stays flat, the function with m = 1 (piecewise linear function) is selected when the control input changes rapidly, and the function with m = 2 (piecewise quadratic function) tends to be selected when the control input changes smoothly. These are natural results, but note that, for systems with fast dynamics, a tiny difference in the control input can have a large influence on the result. Therefore, it is important to select the appropriate value of m flexibly in each interval for better control performance. Moreover, the longer the sampling interval, the more conspicuous the improvement in control performance by the proposed method is expected to be.

Figure 8: Switching of sampling functions in the interpolation.

To show that the proposed method can reduce the conservativeness, the maximum values of K_max attainable by the proposed method, the technique in [6], and the one in [7] are searched for, respectively, within the limits of feasibility of the robust RHC problem. The results obtained are shown in Table 2.

Table 2: Upper bound values of 𝐾max.

From Table 2, we can see that the result of the proposed method is much better than that of [6] and slightly better than that of [7]. Therefore, the proposed method with dividing genetic computation achieves less conservative control performance than the preexisting methods in [6, 7].

To examine the performance of the dividing genetic computation method, its computation time is compared with those of NSGA-II [21] and SPEA2 [22].

The computation environment, using the software MATLAB 7.8.0, is shown in Table 3.

Table 3

The maximum computation time for the feedback gain matrix F̄ per step of the robust RHC using the dividing genetic computation method is 0.04 seconds, while the times for NSGA-II and SPEA2 are both 0.02 seconds. Although the proposed method takes twice as long as NSGA-II and SPEA2, it can be considered practicable within such a time.

Moreover, the upper bound values of K_max obtained by the proposed method, NSGA-II [21], and SPEA2 [22] are calculated, respectively: 79.47 by the proposed method (as mentioned above), 78.98 by NSGA-II, and 79.28 by SPEA2. Judging from this, the proposed method appears somewhat superior in reducing the conservativeness of the control performance.

6. Conclusion

A new minimax robust finite RHC method based on dividing genetic computation for constrained sampled-data control systems with structured uncertainties and disturbances has been proposed. A numerical example has been given to show that the proposed method improves the control performance.

Although the dividing genetic computation method can uniformly control the convergence of the solutions of the minimax RHC design problem, further performance comparisons with other GA methods (e.g., NSGA-II and SPEA2) and with other heuristic optimization methods, such as Particle Swarm Optimization [23] and Ant Colony Optimization [24], remain as future work.

Moreover, a method for selecting the best sampling function according to the controlled object also needs to be developed, and the effectiveness of the proposed method should be confirmed on various controlled objects in future work.

Appendix

The proposed minimax approach is easily extended to systems with other constraints specified by ellipsoidal bounds, for example, state estimation errors, as follows.

In the case that x(k) is not fully measured, we need to estimate x(k), where the bound of the estimation error e(k) = x(k) − x̂(k) is guaranteed as an ellipsoidal set:

e^T(k) P_e e(k) ≤ 1,  (A.1)

where P_e is a positive definite symmetric weighting matrix. This specification of the estimation error is a standard one. Now we introduce H_e as

H_e = [I  0  0  −x̂(k)],  (A.2)

and then the relation e(k) = H_e ζ holds. The condition below also holds:

ζ^T ( H_1^T H_1 − H_e^T P_e H_e ) ζ ≥ 0.  (A.3)

Since this condition has the same form as the other constraints in (25), we can include it in the conditions of problem (26) by using a new variable τ_e. Furthermore, in this case, a new output equation with measurement noise ψ(k) is needed instead of (4):

y_ψ(k) = Cx(k) + ψ(k),  ψ^T(k) P_ψ ψ(k) ≤ 1.  (A.4)

We can also include this constraint in the conditions of problem (26) by using a new variable τ_ψ.

Acknowledgment

The author gratefully thanks the anonymous reviewers for their valuable comments and editorial assistance.

References

  1. C. E. García, D. M. Prett, and M. Morari, “Model predictive control: theory and practice—a survey,” Automatica, vol. 25, no. 3, pp. 335–348, 1989.
  2. D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, “Constrained model predictive control: stability and optimality,” Automatica, vol. 36, no. 6, pp. 789–814, 2000.
  3. A. Bemporad and M. Morari, “Robust model predictive control: a survey,” in Robustness in Identification and Control, A. Garulli, A. Tesi, and A. Vicino, Eds., vol. 245 of Lecture Notes in Control and Information Sciences, pp. 207–226, Springer, London, UK, 1999.
  4. P. J. Campo and M. Morari, “Robust model predictive control,” in Proceedings of the American Control Conference, pp. 1021–1026, 1987.
  5. Z. Q. Zheng and M. Morari, “Robust stability of constrained model predictive control,” in Proceedings of the American Control Conference, pp. 379–383, 1993.
  6. M. V. Kothare, V. Balakrishnan, and M. Morari, “Robust constrained model predictive control using linear matrix inequalities,” Automatica, vol. 32, no. 10, pp. 1361–1379, 1996.
  7. F. A. Cuzzola, J. C. Geromel, and M. Morari, “An improved approach for constrained robust model predictive control,” in Proceedings of the European Control Conference, 2001.
  8. J. C. Allwright and G. C. Papavasiliou, “On linear programming and robust model-predictive control using impulse-responses,” Systems and Control Letters, vol. 18, no. 2, pp. 159–164, 1992.
  9. J. H. Lee and Z. Yu, “Worst-case formulations of model predictive control for systems with bounded parameters,” Automatica, vol. 33, no. 5, pp. 763–781, 1997.
  10. A. Bemporad, “Reducing conservativeness in predictive control of constrained systems with disturbances,” in Proceedings of the 37th IEEE Conference on Decision and Control (CDC '98), vol. 2, pp. 1384–1389, December 1998.
  11. A. Bemporad and A. Garulli, “Output-feedback predictive control of constrained linear systems via set-membership state estimation,” International Journal of Control, vol. 73, no. 8, pp. 655–665, 2000.
  12. P. O. M. Scokaert and D. Q. Mayne, “Min-max feedback model predictive control for constrained linear systems,” IEEE Transactions on Automatic Control, vol. 43, no. 8, pp. 1136–1142, 1998.
  13. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, New York, NY, USA, 1989.
  14. Z. Michalewicz and C. Janikow, Genetic Algorithms + Data Structures = Evolution Programs, Springer, London, UK, 1992.
  15. J. D. Schaffer, “Multiple objective optimization with vector evaluated genetic algorithms,” in Proceedings of the 1st International Conference on Genetic Algorithms (ICGA '85), pp. 93–100, 1985.
  16. C. M. Fonseca and P. J. Fleming, “An overview of evolutionary algorithms in multiobjective optimization,” Evolutionary Computation, vol. 3, no. 1, pp. 1–16, 1995.
  17. D. H. Loughlin and S. Ranjithan, “The neighborhood constraint method: a genetic algorithm-based multiobjective optimization technique,” in Proceedings of the 7th International Conference on Genetic Algorithms (ICGA '95), pp. 666–673, 1995.
  18. S. Boyd and C. H. Barratt, Linear Controller Design: Limits of Performance, Prentice-Hall, Upper Saddle River, NJ, USA, 1991.
  19. K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control, Prentice-Hall, Upper Saddle River, NJ, USA, 1996.
  20. B. Wie and D. S. Bernstein, “Benchmark problems for robust control design,” Journal of Guidance, Control, and Dynamics, vol. 15, pp. 1057–1059, 1992.
  21. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
  22. E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: improving the strength Pareto evolutionary algorithm,” in Proceedings of the International Congress on Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems (EUROGEN '01), K. Giannakoglou, D. Tsahalis, J. Periaux, P. Papailou, and T. Fogarty, Eds., pp. 95–100, 2001.
  23. C. A. Coello Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
  24. T. Stützle and H. H. Hoos, “MAX-MIN ant system,” Future Generation Computer Systems, vol. 16, no. 8, pp. 889–914, 2000.