This work focuses on the problem of automatic loop shaping in the context of robust control, specifically within the framework of Quantitative Feedback Theory (QFT). Traditionally, the search for an optimum design, a nonconvex and nonlinear optimization problem, is simplified by linearizing and/or convexifying the problem. In this work, the authors propose a suboptimal solution using a fixed compensator structure and evolutionary optimization. The main idea, in relation to previous work, is the study of fractional compensators, whose singular properties allow the open loop gain function to be shaped automatically with a minimum set of parameters, which is crucial for the success of evolutionary algorithms. Additional heuristics are proposed to guide the evolutionary process towards close-to-optimum solutions, focusing on the avoidance of local optima.

1. Introduction

It is a well-known fact that there is no general procedure to exactly solve nonlinear nonconvex optimization problems when the solutions belong to continuous solution sets ([1]). In the case of solutions belonging to discrete solution sets, it is always possible to find the optimal solution by a branch and bound type exhaustive search ([2]). Branch and bound techniques can also be applied to continuous solution set problems, especially when combined with interval analysis ([3, 4]). The obtained solution is, in general, only close to optimal, up to a certain accuracy factor. A major drawback of branch and bound techniques is that they are very costly in terms of computation time. The only way to quickly obtain an (approximate) solution to this sort of problem is to use some kind of randomized search algorithm, such as random algorithms ([5]) or evolutionary algorithms ([6]), depending on the particular problem to be solved.

Quantitative Feedback Theory is a robust frequency domain control design methodology which has been successfully applied to practical problems from different domains [7]. One of the key design steps in QFT, equivalent to controller design, is loop shaping of the open loop gain function subject to a set of restrictions (or boundaries) given by the design specifications and the (uncertain) model of the plant. Although this step has traditionally been performed by hand, the use of CACSD tools (e.g., the QFT MATLAB Toolbox [8]) has made manual loop shaping much simpler. However, the problem of automatic loop shaping is of enormous practical interest, since manual loop shaping can be hard for the nonexperienced engineer, and thus it has received considerable attention, especially in the last three decades.

Optimal QFT loop computation is a nonlinear nonconvex optimization problem, for which there is not yet an optimization algorithm that computes a globally optimum solution in a reasonable time for interactive design purposes. It must be noted, however, that the work by Nataraj and others on this subject, based on deterministic optimization procedures combining branch and bound optimization and interval analysis techniques, is very promising (see, e.g., [9, 10]).

Other typical approaches to this problem have tried to find approximate solutions in different ways. For instance, some authors have simplified the problem in order to obtain a different optimization problem for which there exists a closed solution, or an optimization algorithm which does guarantee a global optimum in a shorter computation time. A trade-off between a necessarily conservative simplification of the problem and computational solvability has to be chosen. This is the approach in, for instance, [11, 12]. Some authors have investigated the loop shaping problem in terms of particular structures, with a certain degree of freedom, which can be shaped to the particular problem to be solved. This is the case in [13–15]. Another possibility is to use randomized search algorithms, able to directly face nonlinear and nonconvex optimization problems, at the cost of returning a solution that is only close to optimal, with no general guarantee on how far the returned solution is from the optimum. This is the approach adopted in [16, 17], based on evolutionary algorithms.

Evolutionary algorithms are computationally demanding, especially as the dimension of the search space increases. In this paper the authors study the use of evolutionary algorithms-based optimization, proposing the addition, with respect to previous work, of some heuristics, very much specific to the particular problem under consideration, which help to improve the accuracy of the obtained solutions and the computation time needed to obtain them. In this sense, a good structure for the compensator, in terms of using a reduced set of parameters but with a rich frequency domain behavior, is of crucial importance. This is the main heuristic proposed in this paper: to use evolutionary algorithms together with a flexible structure, able to get a close-to-optimum solution, but with a reduced number of parameters. In previous work, the compensator has been fixed to a rational structure, with a finite (but not necessarily small) number of zeros and poles. In this work, the main contribution is to introduce a fractional compensator that, with a minimum number of parameters, gives a flexible structure in the frequency domain regarding automatic loop shaping. In fact, it can be approximated by a rational compensator, but only with a considerably larger number of parameters. This dramatic reduction in the number of parameters has proven to be of critical importance for the success of evolutionary algorithms in the solution of the automatic loop shaping problem. Other applied heuristics have to do with including some features in the objective function that guide the evolutionary search towards close-to-optimum solutions, paying special attention to preventing the search from getting stuck in local minima, which is especially likely to happen in the problem under consideration.

In this work, following [18–21], the particular case of minimum phase open loop gain functions is considered, for which the investigated compensators can give a good structure with a reduced set of parameters. Some first steps towards the nonminimum phase case are given in [18].

The rest of the paper is organized as follows. In Section 2 a brief introduction to QFT is given; in Section 3 the proposed solution to the QFT automatic loop shaping problem by evolutionary algorithms is presented; in Section 4 this procedure is applied to a typical benchmark problem. Finally, Section 5 presents the conclusions.

2. Introduction to QFT

The basic idea in QFT ([7]) is to define and take into account, along the control design process, the quantitative relationship between the amount of uncertainty to deal with and the amount of control effort to use. Typically, the QFT control system configuration (see Figure 1) considers two degrees of freedom: a controller , in the closed loop, which manages uncertainty and disturbances affecting or ; and a precompensator, , designed after , used to satisfy tracking specifications.

Consider an uncertain plant, represented as the set of transfer functions , with being the nominal plant and being a set of transfer functions representing uncertainty. For a certain frequency , the template is defined as the Nichols chart (NC) region ([22]) given by

The design of the controller is accomplished in the Nichols chart, by shaping the nominal open loop transfer function, . A discrete set of design frequencies is chosen. Given quantitative specifications on robust stability and robust performance on the closed loop system, the set of boundaries is computed. Each defines the allowed region (conversely, forbidden region ) in NC for , so that (conversely, ) (boundaries satisfaction) implies specification satisfaction by . For example, in Figure 9 we have the following.

(i) The open lines horizontally crossing the Nichols plane from to are performance boundaries. In this example, for each , is defined as the region above .
(ii) The closed line around the point (0 dB, ) is a robust stability boundary, defining (conversely ) as the region outside (conversely inside) . In this example, this boundary is a Universal High-Frequency Boundary (UHFB), a stability boundary called universal because the region inside it, , is not forbidden only for a particular frequency , but for all , so it imposes for all .

The basic step in the design process, loop shaping, consists of the following nonlinear nonconvex constrained optimization problem: design of which satisfies boundaries (constraints) and minimizes the high-frequency gain (optimization criterion—[23]), defined as where is excess of poles over zeros. The comparison in terms of this optimization criterion of two nominal open loops and , with respective excess of poles over zeros and , only makes sense if . The comparison is performed in terms of the loops high-frequency gains, and , respectively.
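As a minimal sketch of the optimization criterion (variable and function names are hypothetical, and the formula assumed is the standard high-frequency gain of QFT, the limit of s raised to the pole-zero excess times L(s)), the criterion and the comparability condition can be computed from a rational loop's leading coefficients:

```python
# Sketch (hypothetical names): high-frequency gain of a rational open loop
# L(s) = num(s)/den(s), assuming the usual QFT criterion
# k_hf = lim_{s->inf} s^npe * L(s), with npe the excess of poles over zeros.
# For a rational L this limit is the ratio of leading coefficients.

def high_frequency_gain(num, den):
    """num, den: polynomial coefficients in descending powers of s."""
    npe = (len(den) - 1) - (len(num) - 1)  # excess of poles over zeros
    k_hf = num[0] / den[0]
    return k_hf, npe

# Two loops are comparable under this criterion only if their excesses match.
k1, e1 = high_frequency_gain([2.0, 1.0], [1.0, 3.0, 2.0, 0.5])
k2, e2 = high_frequency_gain([4.0, 0.0], [1.0, 1.0, 1.0, 1.0])
assert e1 == e2  # same excess of poles over zeros: k1 and k2 can be compared
```

Under this sketch, the loop with the lower high-frequency gain (here k1) is the better one of two loops sharing the same excess.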

Note that this criterion is defined irrespective of the particular structure used for . It has been shown ([23]) that for minimum phase this optimum exists, it is unique and, in general, has an infinite number of poles and zeros, and so it is not implementable. For this reason it is referred to throughout this work as the QFT theoretical optimum, , with optimal high-frequency gain .

Since is not practical as a loop design, a more practical definition of optimum can be stated. Define , , , as the set of controller structure parameters to be instantiated. Consider an open loop structure parameterized by , that is, each defines , a particular instance of , with high-frequency gain . Define , the minimum high-frequency gain that can be achieved by any instance. An -optimal transfer function or is defined as an instance such that .

For a given open loop structure, represents the best approach to that can be achieved with that structure. The ability of a structure to achieve shape is called close-to-optimality, defined as . Since computing is in general still a hard optimization problem, and thus is not known, it is common to compute instead suboptimal transfer functions with high-frequency gain as close to as possible, with . This is the approach considered in this work, using evolutionary algorithms to compute an with which approaches as much as possible. A measure of how accurately a certain approaches can only be established in relative terms, that is, by comparing it to another in terms of their high-frequency gains.

The meaning of this optimum definition is minimizing the cost of feedback, related to sensor noise amplification at high frequencies, by minimizing at high frequencies, which, due to the phase/magnitude relations given by the Bode integrals ([24]), is equivalent to minimizing , subject to satisfaction. Figures 2 and 3 show the typical shape of a QFT optimal loop , with polygonal line simplified versions of and boundaries in . For frequencies , minimization is limited by performance boundaries. For frequencies , minimization is only limited by the UHFB (which makes tightly follow the right and bottom sides of the UHFB) and the specified excess of poles over zeros . Note that is not directly established in the specifications, but indirectly, as a consequence of boundaries in .

Note the shape of in the frequency range ; see Figures 2 and 3. It is called Bode step ([18, 23, 25], first defined in [25]), a frequency band where has constant magnitude and a fast phase decrease (according to minimization objective, once the UHFB does not constrain this objective), surrounded by frequencies where is constant ( and , resp.) and decreases. Bode step is very typical of QFT optimal loops, and when designing a suboptimal loop , the designer has to try to make exhibit a Bode step-like shape in order to succeed in accurately approaching . Note that a Bode step in can be interpreted as tightly following the bottom part of the UHFB.

3. QFT Automatic Controller Design by Evolutionary Optimization

The method used for automatic QFT controller design consists of using a fixed compensator structure with a certain number of free parameters , , which constitute the solution space of the nonconvex nonlinear global optimization to be performed by evolutionary algorithms. As stated in the introduction, there are two critical factors which determine the efficiency of such an approach. One is the dimension of the search space, , which exponentially increases the computational cost in terms of time. The second factor is the use of adequate heuristics which guide the evolutionary search towards close-to-optimum solutions.

The main contribution of this work has to do with the first factor: the use of a fractional structure in the compensator is proposed as a key idea to obtain flexible structures, able to yield close-to-optimum solutions, but with a reduced . Several structures from the literature have been adapted to solve the QFT loop shaping problem, including some structures proposed by the authors in previous works. These structures are introduced in Sections 3.1.1 to 3.1.4.

The second contribution of this work has to do with the second factor. Some ad hoc heuristics, with features very much specific to the particular problem under consideration, have been developed to help the evolutionary search improve the accuracy of the obtained solutions and the computation time needed to obtain them, especially in terms of local minima avoidance. These heuristics are presented in Section 3.2. Section 3.3 describes the algorithm used for evolutionary optimization, detailing the objective function, which includes these heuristics.

In Section 4 a design example is solved by using these heuristics and all the fractional structures. The results obtained by each fractional structure are compared.

3.1. Fractional Structures
3.1.1. TID

The TID controller [26] is a modified version of the PID controller, where the proportional term is replaced by a tilted (fractional) term, with transfer function , which permits a better approach to the theoretical optimum. The resulting controller, including a low pass filter, is given by
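A hedged sketch of such a compensator's frequency response follows; the parameter names (T, I, D, n, wf) and the first-order form of the low pass filter are illustrative assumptions, not the paper's exact parameterization in (3.1):

```python
import numpy as np

# Illustrative sketch (assumed parameterization, not the paper's (3.1)):
# a TID-like compensator C(s) = T/s^(1/n) + I/s + D*s, times a first-order
# low pass filter 1/(1 + s/wf) for high-frequency roll-off.

def tid_response(w, T=1.0, I=0.5, D=0.1, n=3.0, wf=100.0):
    s = 1j * np.asarray(w, dtype=float)
    tilt = T / s ** (1.0 / n)      # tilted (fractional) term replacing P
    pid_part = I / s + D * s       # classical integral and derivative terms
    lpf = 1.0 / (1.0 + s / wf)     # low pass filter
    return (tilt + pid_part) * lpf

w = np.logspace(-2, 3, 200)
mag_db = 20 * np.log10(np.abs(tid_response(w)))
```

The fractional exponent 1/n gives the tilted term a constant phase lag of 90/n degrees over the whole frequency axis, which is the extra shaping freedom the TID structure adds over a plain proportional gain.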


3.1.2. Fractional-Order PID

The fractional-order PID controller [27, 28] is a PID generalization in which both the integrator and derivative terms have real order. In this work, the multiplicative version given in [29] will be used, with transfer function CRONE 2 (Section 3.1.3) is a particular case of (3.2), where some parameters are linked, and so flexibility is reduced compared to (3.2).

3.1.3. CRONE-Based Controllers

The CRONE approach [30, 31] defines three generations of fractional controllers based on the use of frequency-band noninteger differentiators. The first and second generations (CRONE 1 and CRONE 2) use real non-integer differentiation, whereas the third generation (CRONE 3) uses complex non-integer differentiation. Three CRONE-based compensators are studied. The first one uses the transfer function structure common to CRONE 1 and 2. The basic component of this structure is an order differentiator in the form of the implementable band-defined transfer function. An order band-limited integrator and an order lowpass filter are added to manage accuracy, robustness, and control effort problems, with the open loop structure finally defined as The application of the CRONE 2 structure to the QFT problem is quite straightforward.
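A band-limited non-integer differentiator of the kind CRONE 1 and 2 build on can be sketched as follows; the parameterization ((1 + s/wb)/(1 + s/wh))^nu is the common textbook form and is an assumption here, since the paper's exact expression (including any gain normalization) is not reproduced above:

```python
import numpy as np

# Assumed band-limited differentiator form (common CRONE parameterization):
# D(s) = ((1 + s/wb) / (1 + s/wh))**nu, acting as an order-nu
# differentiator only inside the band wb << w << wh.

def band_limited_differentiator(w, nu=0.5, wb=1.0, wh=100.0):
    s = 1j * np.asarray(w, dtype=float)
    return ((1 + s / wb) / (1 + s / wh)) ** nu

w = np.logspace(-2, 4, 400)
phase_deg = np.angle(band_limited_differentiator(w), deg=True)
# Inside the band the phase plateaus near nu times the phase of a pure
# differentiator (90 degrees), approaching nu*90 as wh/wb grows.
```

The appeal for loop shaping is that a single parameter nu sets an almost constant phase over a whole frequency band, which is exactly the kind of low-parameter flexibility the evolutionary search benefits from.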

The second compensator uses the CRONE 3 structure, consisting of replacing the (real) order integrodifferentiator in CRONE 2 with the real part of a (complex) order integrodifferentiator in the form of the implementable band-defined transfer function with and . The open loop structure in CRONE 3 is finally defined as

The third compensator is decoupled CRONE 3 ([32]), a modified version of the CRONE 3 structure (3.5), where some parameters are decoupled in order to obtain higher flexibility. Frequencies and are decoupled by defining new frequencies , and . is decoupled by redefining in as , where is a new free parameter, , and . The new structure to be shaped is For both CRONE 3 and decoupled CRONE 3, a detailed study of the parameters involved, their interrelations, and their allowed ranges in order to obtain the desired behavior is developed in [32]. In [33] a CRONE-based structure is proposed for the design of open loops based on Bode optimum-like specifications.

3.1.4. FCT Terms

Fractional order Complex Terms (FCTs) [19, 34] are terms with , corresponding to poles and corresponding to zeros. Different combinations of these terms can be used depending on the problem to be solved. For a typical tracking problem with a minimum phase plant, the structure with , two FCT poles and one FCT zero, achieves very good results.

3.2. Heuristics

Once the problem of a large number of parameters to be optimized has been solved by the use of fractional structures, the main remaining problem is that, due to the typically nonconvex nature of the constraints in the solution space, the evolutionary search can easily get stuck in local minima. The main goal of the heuristics design has been to help the evolutionary search quickly escape these local minima when they are detected.

The objective function used as the criterion for the natural selection of individuals during the evolutionary optimization process, , is a multiobjective real function which returns a to-be-minimized real value related to nonviolation of boundaries (constraint), stability (constraint), and minimization of (to-be-optimized value). Violation of boundaries and lack of stability are, in fact, constraints, but due to how evolutionary algorithms work, they have to be translated into to-be-optimized values, as originally is. A first approach could be to assign a large value to any solution which violates any boundary or is unstable. But this does not differentiate between solutions which violate a certain boundary completely or only slightly, which makes blind, in the sense that it is not able to find a way out of the violation of a given constraint. In order to avoid the evolutionary optimization getting stuck in such situations, it is necessary to guide it by establishing, through an adequate shaping of , that the violation of a certain constraint by a certain solution is not as important as the violation produced by another solution , so . This way, the evolutionary algorithm will prefer the survival of instead of , so that better solutions survive, even if they do not satisfy constraints. This scheme, repeated generation after generation, leads to solutions that do respect constraints. The following are some of the components of intended to guide the evolutionary optimization towards solutions satisfying constraints.

(i) For open boundaries, let be an open boundary, and consider that it is defined as , a function that assigns to every phase in NC the magnitude of at that phase. For certain solutions and , it should hold that , . Assume that this constraint is not satisfied; that is, is violated by both and . is considered better than when is closer to than is, in terms of magnitude. More formally, the penalty for this violation included in is a monotonically increasing function of . This choice, made generation after generation, contributes to getting solutions such that is over , thus satisfying the constraint.
(ii) For closed boundaries, let be a closed boundary, and consider that it is defined by , , two functions that assign to every phase in NC the upper and lower magnitudes of at that phase, respectively (bivalued boundary). For certain solutions and , it should hold that and , . Assume that this constraint is not satisfied; that is, is violated by both and . is considered better than when is closer to or than is, in terms of NC Euclidean distance. This choice, made generation after generation, contributes to getting solutions such that is out of , thus satisfying the constraint.
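The graded penalty for open boundaries described in (i) can be sketched as follows (a minimal illustration, with hypothetical names; the squared shortfall matches the choice made later in the CPBV procedure):

```python
# Sketch of the graded penalty in (i): for an open performance boundary,
# a loop point whose magnitude falls below the minimum allowed magnitude
# is penalized by a monotonically increasing function of the shortfall,
# so "almost satisfying" scores better than "badly violating".

def open_boundary_penalty(loop_mag_db, boundary_mag_db):
    shortfall = boundary_mag_db - loop_mag_db   # positive when violating
    return shortfall ** 2 if shortfall > 0 else 0.0  # squared shortfall

# A loop 1 dB below the boundary is preferred to one 5 dB below it:
assert open_boundary_penalty(9.0, 10.0) < open_boundary_penalty(5.0, 10.0)
assert open_boundary_penalty(12.0, 10.0) == 0.0  # boundary satisfied
```

The monotone shape is what gives the evolutionary search a usable gradient across infeasible solutions, instead of a flat "big value" plateau.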

Another important consideration, in order to avoid the evolutionary search getting stuck, is an objective function which gives more importance to constraint satisfaction than to minimization. This way, the evolutionary search first looks for valid solutions and then, once they have been achieved, optimizes among them.

3.3. Algorithm

This section describes the optimization algorithm which has been implemented for controller synthesis. It consists of the use of commercial evolutionary algorithm software (the Genetic and Evolutionary Algorithm Toolbox for use with MATLAB (GEATbx, [35])) together with an ad hoc objective function, programmed by the authors, implementing the heuristics described in Section 3.2.

The evolutionary algorithm is, in particular, a multiple-subpopulation evolutionary search algorithm. In this kind of evolutionary search, each subpopulation evolves in an isolated way for a few generations (as happens in a single-population evolutionary algorithm). After that, one or more individuals are exchanged between subpopulations. This process models species evolution more closely than single-population evolutionary algorithms do, which helps to avoid local minima. The basic structure of this kind of algorithm is the following (adapted from [35]):

PROCEDURE Multipopulation_Evolutionary_Search_Algorithm (search_space)
BEGIN PROCEDURE
  INITIALIZATION, consisting of:
    creation of initial_population  // set of randomly chosen
                                    // points from search space,
                                    // grouped in species
    evaluation_of_individuals = OBJECTIVE_FUNCTION(initial_population);
  WHILE NOT(any termination criterion is met)
  BEGIN WHILE
    NEW POPULATION GENERATION:  // consisting of...
      fitness assignment;
      selection;
      recombination;
      mutation;
      evaluation_of_offspring = OBJECTIVE_FUNCTION(offspring);
      reinsertion;
    migration;
    competition;
  END WHILE;
  RESULT = best individual in last population;
END PROCEDURE;
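The subpopulation-plus-migration scheme above can be sketched in a few lines; this is a minimal illustration (not the GEATbx implementation), with truncation selection, averaging crossover, Gaussian mutation, and ring migration as assumed operator choices:

```python
import random

# Minimal sketch of a multiple-subpopulation evolutionary search:
# subpopulations evolve in isolation and periodically exchange their
# best individuals (ring migration). Operator choices are illustrative.

def evolve(objective, bounds, n_subpops=4, subpop_size=20,
           generations=100, migrate_every=10, seed=0):
    rng = random.Random(seed)
    rand_ind = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    subpops = [[rand_ind() for _ in range(subpop_size)]
               for _ in range(n_subpops)]
    for gen in range(generations):
        for pop in subpops:
            pop.sort(key=objective)                  # fitness assignment
            parents = pop[:subpop_size // 2]         # truncation selection
            offspring = []
            for _ in range(subpop_size - len(parents)):
                a, b = rng.sample(parents, 2)
                child = [(x + y) / 2 for x, y in zip(a, b)]     # recombination
                child = [x + rng.gauss(0, 0.1) for x in child]  # mutation
                offspring.append(child)
            pop[:] = parents + offspring             # reinsertion
        if gen % migrate_every == 0:                 # migration (ring)
            for i, pop in enumerate(subpops):
                donor = subpops[(i + 1) % n_subpops]
                pop[-1] = min(donor, key=objective)[:]
    return min((ind for pop in subpops for ind in pop), key=objective)

best = evolve(lambda x: (x[0] - 3.0) ** 2, bounds=[(-10.0, 10.0)])
```

Isolated evolution lets each subpopulation explore a different basin, and occasional migration spreads good genetic material, which is the mechanism that helps avoid local minima.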

This is the code which implements the objective function described in Section 3.2:

PROCEDURE Objective_Function (set_of_individuals)
BEGIN PROCEDURE
  FOR EACH individual IN set_of_individuals, indexed as i
    SBV = Compute_Stability_Boundary_Violation(individual);
    SRV = Compute_Stability_Ray_Violation(individual);
    PBV = Compute_Performance_Boundary_Violation(individual);
    NBV = Compute_Noise_Boundary_Violation(individual);
    PSP = Compute_Phases_Separation_Penalization(individual);
    MPP = Compute_Maximum_Phase_Penalization(individual);
    HFG = Compute_High_Frequency_Gain(individual);
    Constraints_Violation = BIG_VALUE * (weights[1] * SBV
      + weights[2] * SRV + weights[3] * PBV + weights[4] * NBV);
    Technical_Penalizations = BIG_VALUE * (weights[5] * PSP
      + weights[6] * MPP);
    Optimization_Criterion = weights[7] * HFG;
    RESULT[i] = Constraints_Violation + Technical_Penalizations
      + Optimization_Criterion;
  END FOR;
END PROCEDURE;

Values SBV, SRV, PBV, and NBV are related to a direct violation of some of the problem constraints. This is the reason they are amplified by BIG_VALUE, so that any solution satisfying the constraints is preferred (has a lower objective value) over any other which violates some of them.

Values PSP and MPP are related to technical issues, which must be considered in order to avoid nonreal solutions. Nonreal solutions are solutions that, due to intrinsic limitations of discrete computation, such as the need to consider a finite number of points to represent boundaries and loops, would satisfy the algorithm's discrete translation of the continuous constraints, but not the original continuous constraints. These values are also amplified by BIG_VALUE.

Value HFG is directly related to the optimization criterion, high-frequency gain minimization. It is not amplified by BIG_VALUE, and so, in practice, it is only decisive when no constraint is violated.
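The effect of the BIG_VALUE amplification can be illustrated with a toy calculation (hypothetical numbers; the paper does not state the exact value of BIG_VALUE here):

```python
# Sketch of how BIG_VALUE makes constraint satisfaction dominate the
# optimization criterion: any feasible solution scores below any
# infeasible one, yet graded violations still rank among themselves.

BIG_VALUE = 1e6  # illustrative magnitude

def objective(constraint_violation, high_frequency_gain_db):
    return BIG_VALUE * constraint_violation + high_frequency_gain_db

feasible = objective(0.0, 160.0)            # no violation, large gain
slightly_infeasible = objective(0.5, 120.0)  # small violation, better gain
badly_infeasible = objective(25.0, 100.0)    # large violation
assert feasible < slightly_infeasible < badly_infeasible
```

This ordering is what makes the search behave lexicographically: first find feasible loops, then minimize the high-frequency gain among them.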

The weights are conceived to give more importance to certain penalizations over others, but they have not been used for the moment; that is, all of them are set equal.

As explained in Section 3.2, the SBV, SRV, PBV, and NBV values correspond to a gradient, related to how much a certain boundary is violated, so that the objective function distinguishes different degrees of violation of a certain constraint, helping the evolutionary algorithm to approach solutions which do not violate that constraint. For instance, consider the following MATLAB code, corresponding to the implementation of the function Compute_Performance_Boundary_Violation (CPBV).

PROCEDURE Compute_Perform_Boundary_Violation (loop_points, boundaries)
  loop_phases = loop_points.phases;
  loop_magnitudes = loop_points.magnitudes;
  bnd_phases = boundaries.phases;
  bnd_magnitudes = boundaries.magnitudes;
  nphases = length(bnd_phases);
  bnd_phases_indexes = mod(round(loop_phases*(nphases/360)+nphases), nphases);
  bnd_phases_indexes = nphases - bnd_phases_indexes;
  minimum_allowed_magnitudes = bnd_magnitudes(bnd_phases_indexes);
  magnitude_differences = loop_magnitudes - minimum_allowed_magnitudes;
  negative_differences = magnitude_differences .* (magnitude_differences < 0);
  squared_differences = negative_differences.^2;
  RESULT = sum(squared_differences);
END PROCEDURE;
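The core of the CPBV computation can be transcribed compactly as follows; this sketch omits the phase-index lookup and assumes the loop magnitudes and the minimum allowed (boundary) magnitudes are already aligned frequency by frequency:

```python
import numpy as np

# Sketch of the CPBV core (phase-index lookup omitted): per-frequency
# shortfalls below the boundary are squared and summed, so bigger
# violations weigh much more than small ones, and multiple violating
# frequencies accumulate.

def cpbv(loop_mags_db, min_allowed_mags_db):
    diff = np.asarray(loop_mags_db) - np.asarray(min_allowed_mags_db)
    violations = np.minimum(diff, 0.0)      # keep only negative differences
    return float(np.sum(violations ** 2))   # squared, then summed

# 2 dB below at one frequency, 7 dB below at another: 4 + 49 = 53
print(cpbv([8.0, 12.0, 3.0], [10.0, 10.0, 10.0]))  # -> 53.0
```

Because the indicator is a smooth, additive function of the shortfalls, reducing any single violation strictly reduces the objective, which is exactly the gradient behaviour described above.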

For each design frequency , this procedure checks whether the loop is below the performance boundary at that frequency, , and in that case quantifies how far below it is, which is called the magnitude difference in the code. These magnitude differences are squared, so that bigger magnitude differences are considered much worse than small ones. After that, these values are summed up to obtain the final boundary violation indicator, so that this indicator doubles when the loop violates boundaries at two frequencies instead of one, triples when there are three violating frequencies, and so on. For an example of the CPBV function behaviour, consider Figures 4, 5, and 6, representing loops , and , respectively, corresponding to the first loop design in Section 4 (based on a PID controller). In Figure 4, performance boundaries are violated by at design frequencies rad/s and rad/s, by about 5 dB. The function yields a boundary violation indicator . Using BIG_VALUE = , this produces an objective function value around , which is a big penalization. The evolutionary algorithm will prefer individuals (loops) with lower objective function values. For instance, in Figure 5 performance boundaries are violated only at design frequency rad/s, by 1 or 2 dB. The CPBV function yields a boundary violation index , which corresponds to an objective function value , still a big penalization, but lower than . This way, the evolutionary algorithm tends to move the loop, at each frequency , out of (forbidden area) and towards (allowed area), producing a design like that in Figure 6, where no boundary is violated, so the objective function value is contributed only by the optimization criterion, with no penalization for constraint violation, yielding .

Figure 7 shows the evolution, for an optimization of the loop structure shown in Figures 4, 5, and 6, of , where is an index for each generation of the evolutionary algorithm execution, and is the best individual (loop) in generation , best in the sense of minimizing the objective function. During the first three generations there are some boundary violations, which produce large objective function values. These values decrease as the generation index increases, due to the effect of the implemented heuristics. From the fourth generation on there is no boundary violation, so BIG_VALUE is no longer applied, and from that point the task of the evolutionary algorithm is reduced to obtaining better and better values of the optimization criterion. This process can be better visualized in Figure 8, where the scale has been adapted for this purpose. This figure shows how the objective function converges to a certain value, in this case around 152, from a certain generation, in this case around generation 130.

4. Design Example

To illustrate the behavior of the proposed optimization method, Benchmark Example number 2 of the QFT Toolbox for MATLAB [8] is used. It has also been used, for instance, in [16, 17]. In [16], two rational loops are designed: a second-order one, with  dB, and a third-order one, with  dB, with in both cases. A common will be used throughout this section, so that loops can be compared in terms of . In those structures which cannot achieve by themselves, a term is added to fix .

For comparison purposes, a classical PID (TID with ) based design (Figure 9) is also considered, with  dB and parameters in (3.1): , , , , , .

The result obtained with TID controller is shown in Figure 10, with dB, improving PID. Parameters in (3.1) are , , , , , , .

improves TID result by another 12 dB, with  dB, Figure 11. Parameters in (3.2), with term (4.1), are , , , , , , , .

CRONE 2 structure yields the loop shown in Figure 12, with corresponding dB, and parameters in (3.3), are , , , , , . This result is slightly worse than 's, which could be expected, since CRONE 2 is the same structure as , but with some parameters linked, which implies less flexibility.

Figure 13 shows a design using the CRONE 3 structure, with dB; parameters in (3.5) are , , , , , , , . The result is slightly better than with CRONE 2.

Figure 14 shows a design using the decoupled CRONE 3 structure, with  dB; parameters in (3.6) are , , , , , , , , , , , , . This result significantly improves on the original CRONE 3 design (by more than 20 dB).

Finally, Figure 15 shows the result obtained with the FCT structure (3.8), with  dB, which improves on the decoupled CRONE 3 result by more than 10 dB. Parameters for (3.8) are , , , , , , , , , .

Figure 15 shows a comparison of the noise amplification at the plant input, , achieved by each design. Table 1 summarizes the results obtained in terms of . As can be easily checked, there is a direct correlation between and . Note that the more flexible the structure used (and so the better it achieves), the better the Bode step-like shape its associated loop exhibits.

5. Conclusions

An automatic QFT controller design procedure, based on evolutionary optimization of the parameters of a fixed structure, has been proposed. The key idea behind this proposal is the introduction of a structure with few parameters (a must in order to get good results from evolutionary optimization) but, at the same time, flexible enough, thanks to its fractional nature, to get results which are close to the optimum. Fractional structures have been proposed as ideal candidates. Additional heuristics, focused on guiding the evolutionary search to prevent it from getting stuck in local minima, have been proposed. These structures and heuristics have achieved very good results in terms of the classical QFT optimization criterion.


Acknowledgments

This work was partially supported by the Spanish Government project DPI2007-66455-C02-01, which is greatly appreciated by the authors.