Research Article  Open Access
Improving Performance of Evolutionary Algorithms with Application to Fuzzy Control of a Truck Backer-Upper System
Abstract
We propose a method to improve the performance of evolutionary algorithms (EA). The proposed approach defines operators which can improve the performance of EA: a Levy distribution function for strategy-parameter adaptation, calculation of a mean point for finding a proper region for breeding offspring, and shifting of strategy parameters to change the sequence of these parameters. Thereafter, a set of benchmark cost functions is utilized to compare the results of the proposed method with some other well-known algorithms. It is shown that the speed and accuracy of EA are increased accordingly. Finally, this method is exploited to optimize fuzzy control of a truck backer-upper system.
1. Introduction
Evolutionary algorithms (EA) are usually the first candidates for hard optimization tasks. They can deal with many kinds of problems and cost functions, such as multimodal, discrete, constrained, high-dimensional, and stochastic cost functions; consequently, they are suitable for many applications. In evolutionary computation, four historical paradigms have served as the basis for much of the activity in this field: genetic algorithms (Holland, 1975) [1], genetic programming (Koza, 1992) [2], evolution strategies (Rechenberg, 1973) [3], and evolutionary programming (EP) [4]. The basic differences between the paradigms lie in the nature of the representation schemes, the reproduction operators, and the selection methods [5].
These methods have drawn much attention from the research community in conjunction with parallel and/or distributed computation. EP, in particular, was studied initially as a method for generating artificial intelligence [6, 7]; it is a stable kind of EA and has many advantages in optimizing multimodal problems. Among all EP algorithms proposed in the literature, classical EP (CEP), fast EP (FEP) [8], and Levy-distributed EP (LEP) [6] are the most famous variants.
The classical evolutionary programming (CEP) can be presented as follows.
(1) Generate the initial population of μ individuals, and set k = 1. Each individual is taken as a pair of real-valued vectors, (x_i, η_i), for all i in {1, …, μ}, where the x_i's are objective variables and the η_i's are standard deviations for Gaussian mutations (also known as strategy parameters in self-adaptive evolutionary algorithms).
(2) Evaluate the fitness score of each individual (x_i, η_i), for all i in {1, …, μ}, based on the objective function f(x_i).
(3) Each parent (x_i, η_i), i = 1, …, μ, creates a single offspring (x_i', η_i') by

x_i'(j) = x_i(j) + η_i(j) N_j(0, 1),    (1)
η_i'(j) = η_i(j) exp(τ' N(0, 1) + τ N_j(0, 1)),    (2)

for j = 1, …, n, where x_i(j), x_i'(j), η_i(j), and η_i'(j) denote the jth component of the vectors x_i, x_i', η_i, and η_i', respectively. N(0, 1) denotes a normally distributed one-dimensional random number with mean zero and standard deviation one. N_j(0, 1) indicates that the random number is generated anew for each value of j. The factors τ and τ' are commonly set to (√(2√n))⁻¹ and (√(2n))⁻¹.
(4) Calculate the fitness of each offspring (x_i', η_i'), for all i in {1, …, μ}.
(5) Conduct pairwise comparison over the union of parents (x_i, η_i) and offspring (x_i', η_i'), for all i in {1, …, μ}. For each individual, q opponents are chosen uniformly at random from all the parents and offspring. For each comparison, if the individual's fitness is no smaller than the opponent's, it receives a "win."
(6) Select the μ individuals out of (x_i, η_i) and (x_i', η_i'), for all i in {1, …, μ}, that have the most wins to be parents of the next generation.
(7) Stop if the halting criterion is satisfied; otherwise, set k = k + 1 and go to step (3).
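As a concrete illustration, one generation of the CEP steps above might be sketched in NumPy as follows. The population size, opponent count q, and the sphere cost in the usage note are illustrative choices, not the paper's settings, and fresh normal draws are used for each occurrence of N_j(0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

def cep_step(pop, etas, cost, q=10):
    """One generation of classical EP (CEP), minimizing `cost`.

    pop  : (mu, n) array of objective variables x_i
    etas : (mu, n) array of strategy parameters eta_i
    cost : callable mapping an (n,) vector to a scalar
    q    : number of tournament opponents per individual
    """
    mu, n = pop.shape
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))
    tau_prime = 1.0 / np.sqrt(2.0 * n)

    # Step (3): each parent creates one offspring via (1) and (2).
    common = rng.standard_normal((mu, 1))    # N(0,1), shared across components
    per_dim = rng.standard_normal((mu, n))   # N_j(0,1), drawn anew per component
    new_etas = etas * np.exp(tau_prime * common + tau * per_dim)
    offspring = pop + etas * rng.standard_normal((mu, n))

    # Steps (4)-(5): q-opponent tournament over parents and offspring.
    union_x = np.vstack([pop, offspring])
    union_e = np.vstack([etas, new_etas])
    fitness = np.array([cost(x) for x in union_x])
    wins = np.zeros(2 * mu, dtype=int)
    for i in range(2 * mu):
        opponents = rng.integers(0, 2 * mu, size=q)
        wins[i] = np.sum(fitness[i] <= fitness[opponents])

    # Step (6): the mu individuals with the most wins become the next parents.
    keep = np.argsort(-wins)[:mu]
    return union_x[keep], union_e[keep]
```

Running the step repeatedly on a simple unimodal cost (e.g. the sphere function) drives the best individual toward the minimum.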
The disadvantages arising in the performance of CEP motivate the search for new methods. Although some methods have been introduced to deal with these disadvantages, they have not attained considerable success yet. Moreover, although the EP family has many advantages in dealing with multimodal cost functions, its real-world applications are not up to the mark in comparison with other evolutionary algorithms such as the genetic algorithm. Hence, it is worthwhile to investigate new methods which are more successful in dealing with EP's disadvantages. Many variants of EP have been developed to boost the performance of CEP by changing (1) and (2) in step (3) [6–15]. Most attempts have introduced a better distribution function in place of the Gaussian mutation function. In [8], a Cauchy-mutation-based EP, called fast EP (FEP), was proposed, which demonstrated much better performance than CEP in terms of converging to a near-global optimum point on some benchmark mathematical functions. FEP's success can be attributed to its greater ability to escape from local minima by using Cauchy mutation. At the same time, this leads to difficulties in coping with some other functions: larger jumps are beneficial when the current solution is far away from the global optimum or a better optimum, while such large jumps near the global optimum point are undesirable. In short, FEP is similar to CEP, but it uses a Cauchy function instead of a Gaussian mutation function as the primary search operator. In [6], the authors proposed LEP (evolutionary programming using the Levy probability distribution), which can be considered a generalization of both Gaussian- and Cauchy-mutation EP. Besides CEP and FEP, Narihisa et al. [9, 10] proposed EEP (exponential-mutation evolutionary programming), with a mutation operator based on a double exponential probability distribution. It has been shown that the efficiency of EEP is comparable to that of FEP.
The eminent merit of EEP is that the size of the search step for the solution search process is controlled by the distribution parameter of the double exponential distribution according to the convergence state for given problems [10]. Alipouri et al. [16, 17] introduced a new approach for the EP family. They have shown that using some information remaining from the previous iteration of the running algorithm can speed up and improve the performance of CEP. One piece of information is the average point where the parents settled in the previous iteration. This information is considered as an inertia weight in (1). This method, called momentum coefficient EP (MCEP), changes the searching procedure of CEP by adding a new factor that produces and pulls offspring toward the gathered point (mean value) of the parents. In this method, the gathered point of the individuals is assumed to be an estimate of the global minimum.
In EP, mutation is implemented by adding strategy parameters to the variable vectors of parents in order to produce offspring. When one of the strategy parameters takes on a large value, adding it to the related variable causes an abrupt change. Hence, that variable grows with large steps and deviates far from the optimum point, whereas some of the other variables do not sense considerable changes. If this event repeats for several iterations, the variable goes even further astray, and consequently EP is slowed down. To avoid such an occurrence, [16] introduced a new method, shifted classical evolutionary programming (SCEP), which can enhance the performance of classical evolutionary programming.
In this paper, the speed and accuracy characteristics of EA are improved using cost and coordinate information. The proposed approach defines operators which can modify the performance of EA: a Levy distribution function for strategy-parameter adaptation, calculation of a mean point for finding a proper region for breeding offspring, and shifting of strategy parameters to change the sequence of these parameters. Thereafter, a set of benchmark cost functions is used to compare the results of the proposed method with some other known algorithms. Finally, this modified approach is exploited to optimize fuzzy control of a truck backer-upper system.
The organization of this paper is as follows. Section 2 explains the EP variants; Section 3 introduces the weighted shifted Levy-distributed evolutionary programming (WSLEP); and the main results are presented in Section 4. The application of the proposed method to fuzzy control of a truck backer-upper system is investigated in Section 5. Finally, we conclude in Section 6.
2. Evolutionary Programming’s Variants
2.1. Classical Evolutionary Programming (CEP)
The general form of CEP follows a two-step process of selection and variation in a population. Following initialization of a population, the fitness of each individual in the population is scored with respect to an arbitrary fitness function. In general, selection is applied as a tournament wherein the fitness of each individual in the population is compared against the fitness of a random set of other individuals in the same population. A "win" is recorded for an individual when its fitness equals or exceeds that of another in the tournament set. Individuals are then ranked with respect to the number of wins, and those with the highest numbers of wins over some threshold are selected as parents for the next generation. Parents are randomly varied to generate offspring, and the fitness of each member in the population is reevaluated. This process is repeated for a user-specified number of generations [18].
Offspring (x_i') and strategy parameters (η_i') are updated in each generation by (1) and (2), respectively. This process is confronted with three drawbacks.
(1) In EP, mutation is performed by adding strategy parameters to the coordinates of the parents. Classical evolutionary programming uses a Gaussian distribution function for updating offspring and strategy parameters (mutation parameters). Since the Gaussian distribution has some drawbacks, other distribution functions are used as alternatives. In the following, some well-known distribution functions will be introduced.
(2) The strategy parameters play the main role in deciding the place of the offspring. Determining an optimal lower bound for the strategy parameters is essential for the EP algorithm in most applications. The optimal setting of the lower bound depends on the problem and cannot stay the same throughout the evolution process. In [16], a shifting operator was proposed for solving this problem. In this paper, this method will be used for controlling the strategy-parameter step sizes.
(3) A basic advantage of any algorithm lies in deciding the most suitable place for breeding offspring and finding the route toward the global minimum. This goal is implemented by the mean-point operator in MCEP [17]. In this paper, the mean-point operator is used for finding an optimal region for breeding new offspring.
Therefore, three methods are utilized simultaneously in order to enhance the capabilities of EP family.
2.2. Fast Evolutionary Programming (FEP)
Yao et al. [8] proposed a Cauchy-mutation-based EP, called fast EP (FEP). FEP's success can be attributed to its greater ability to escape local minima by using Cauchy mutation. It demonstrates much better performance than CEP in converging to a near-global optimum point on some benchmark mathematical functions, but not on all.
2.3. Levy Distributed Evolutionary Programming (LEP)
LEP is a variant of EP which is similar to CEP and FEP; the difference lies in the mutation function. LEP uses a Levy distribution function in place of the Gaussian for the mutation function. However, unlike FEP, in LEP only the distribution function in (1) is changed, and (2) remains unchanged. So, the step sizes for the strategy coefficients in LEP and CEP are similar. Practically, LEP is more similar to CEP than to FEP [6].
All three distribution functions, Gaussian, Cauchy, and Levy, are special cases of the stable distributions. These distribution functions can be produced by the following equation:

y = x1 / |x2|^(1/α),    (3)

where x1 and x2 are independent standard Gaussian random numbers. In other words, a Cauchy variable with zero location and unit scale (α = 1) is the ratio of two independent standard normal random variables. Similarly, a Levy variable is the ratio of two independent standard normal random variables raised to a different power, as defined in (3). For example, if w is a Gaussian random number with mean 0 and variance σ², then 1/w² has a Levy distribution with location 0 and scale 1/σ².
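The ratio constructions above can be checked numerically. The following sketch (sample sizes and seed are arbitrary) draws Cauchy samples as a ratio of two standard normals, one-sided Levy samples as 1/w², and compares tail masses, which is what matters for mutation step sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

def cauchy_samples(size):
    # Cauchy(0, 1): the ratio of two independent standard normal variables.
    return rng.standard_normal(size) / rng.standard_normal(size)

def levy_samples(size, sigma=1.0):
    # One-sided Levy with location 0 and scale 1/sigma^2:
    # if w ~ N(0, sigma^2), then 1/w^2 is Levy-distributed.
    w = sigma * rng.standard_normal(size)
    return 1.0 / w ** 2

# Heavier tails mean more large mutation steps: the fraction of samples with
# magnitude above 10 grows from Gaussian to Cauchy to Levy.
n = 100_000
gauss_tail = np.mean(np.abs(rng.standard_normal(n)) > 10)
cauchy_tail = np.mean(np.abs(cauchy_samples(n)) > 10)
levy_tail = np.mean(levy_samples(n) > 10)
```

The tail ordering illustrates why Cauchy and Levy mutations escape local minima more easily than Gaussian mutation, at the price of wasteful jumps near the optimum.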
3. Weighted Shifted Levy Distributed Evolutionary Programming (WSLEP)
In evolutionary computational techniques, the solution of the problem and the values of the individuals finally converge to a unique point. This convergence is seen slowly over the generations, and the algorithm gradually approaches the optimum point as the number of generations increases. This behavior suggests adding the average of the individuals in each generation to the algorithm to enhance the convergence speed of EP. This technique is known as the inertia weight method [17]. In order to implement this idea, first, an average is computed over all of the variables. The resulting individual is called the "gathered point" or "mean point"; the mean value of the winners in the previous iteration is regarded as the gathered point. In the next step, each offspring is steered toward the mean point: the coordinates of the gathered point are added to the coordinates of the offspring. Each coordinate of a new individual is thus calculated by adding three factors: the weighted coordinate of the gathered point, the location of the previous individual, and the strategy parameter multiplied by a Levy random number. Thus, the CEP formulas (1), (2) for producing new individuals are changed as follows:

x_i'(j) = w · mean(j) + x_i(j) + η_i(j) L_j(α),    (4)
η_i'(j) = η_i(j) exp(τ' N(0, 1) + τ N_j(0, 1)),    (5)

where mean(j) stands for the jth element of the gathered point or mean point (Figure 1), L_j(α) is a Levy-distributed random number generated anew for each j, and the weight w is defined in (6) in terms of the total number of iterations (total iter) and the current iteration (iter).
In the above equations, total iter denotes the total number of iterations, which is specified by the user, and iter denotes the current iteration. Figure 1 shows the breeding region of offspring produced by WSLEP.
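Following the three-factor description literally (weighted gathered point, plus the parent's own location, plus the strategy parameter times a heavy-tailed number), offspring breeding might be sketched as below. The Cauchy ratio stands in for the Levy mutation, and the exact way the three factors combine is an assumption based on the prose, not the paper's equation:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_point_offspring(winners, etas, w):
    """Breed offspring steered toward the gathered (mean) point of the
    previous winners.  Each coordinate is the sum of three factors: the
    weighted coordinate of the gathered point, the parent's own coordinate,
    and the strategy parameter times a heavy-tailed random number."""
    mean_point = winners.mean(axis=0)    # "gathered point" of previous winners
    heavy = rng.standard_normal(winners.shape) / rng.standard_normal(winners.shape)
    return w * mean_point + winners + etas * heavy
```

With zero strategy parameters and a population centered at the origin, the offspring coincide with their parents, which makes the role of each factor easy to inspect.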
In addition, the Levy distribution function is used in the mutation operator presented in (4) and (5), as it has long tails, thus giving offspring large step sizes to search the map more quickly, which is necessary for some cost functions.
In EP, mutation is implemented by adding strategy parameters to the variable vectors of parents in order to produce offspring. When one of the strategy parameters takes on a large value, adding it to the related variable causes abrupt changes in that variable. So, the variable grows with large steps and deviates far from the optimum point, whereas some of the other variables do not sense considerable changes. If this continues for several iterations, the variable goes even further astray, which slows down EP. To avoid such an occurrence, [16] introduced a new method (the shifting operator) that can overcome these disadvantages and enhance the performance of classical evolutionary programming. It described a modification of evolutionary programming that uses a shifting method to prevent overly large and overly small changes to the strategy parameters. This method adds a function to the mutation operator; this function operates on the strategy parameters and changes the sequence of these parameters.
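The shifting operator can be sketched with a cyclic roll of each strategy-parameter vector. Treating the shift as cyclic, and drawing one random offset per generation, are assumptions about the operator of [16]:

```python
import numpy as np

def shift_strategy_parameters(etas, k):
    """Cyclically shift each individual's strategy-parameter vector by k
    positions (positive k shifts right, negative k shifts left), so that a
    step size that has grown large is applied to a different variable in the
    next generation instead of repeatedly driving the same one astray."""
    return np.roll(etas, k, axis=1)
```

In a full run, k would typically be redrawn at random each generation, so no single variable stays coupled to a runaway step size.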
A natural question arises: if the three explained approaches can each enhance EP separately, can their combination (using them simultaneously in EP) be even more helpful?
Here, LEP with the weighted mean point and optimized shifting of strategy parameters is used. The pseudocode of the method is given as follows:

Choose the initial population of individuals
Produce strategy parameters by the Levy mutation function
Evaluate the cost of each individual in that population
Repeat until termination (time limit, sufficient cost achieved, etc.):
    Update the weighted mean point by (4)
    Shift the strategy parameters a random number of times to the left or right
    Breed individuals through mutation to give birth to the offspring
    Evaluate the cost of the offspring
    Hold a tournament to decide the next-generation members
End
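The pseudocode above might be assembled into a runnable sketch as follows. The Cauchy ratio again stands in for the Levy mutation, and the inertia-weight schedule, initialization ranges, and population settings are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def wslep(cost, n, mu=30, total_iter=200, q=10):
    """Sketch of the WSLEP loop: heavy-tailed mutation, weighted mean point,
    and random shifting of strategy parameters, with q-opponent tournament
    selection over the union of parents and offspring."""
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))
    tau_p = 1.0 / np.sqrt(2.0 * n)
    pop = rng.uniform(-5.0, 5.0, size=(mu, n))
    etas = np.full((mu, n), 3.0)
    for it in range(total_iter):
        w = it / total_iter                                 # assumed weight schedule
        mean_point = pop.mean(axis=0)                       # gathered point
        etas = np.roll(etas, rng.integers(0, n), axis=1)    # shifting operator
        heavy = rng.standard_normal((mu, n)) / rng.standard_normal((mu, n))
        offspring = w * mean_point + pop + etas * heavy     # three-factor breeding
        new_etas = etas * np.exp(tau_p * rng.standard_normal((mu, 1))
                                 + tau * rng.standard_normal((mu, n)))
        # q-opponent tournament over parents and offspring.
        ux = np.vstack([pop, offspring])
        ue = np.vstack([etas, new_etas])
        f = np.array([cost(x) for x in ux])
        wins = np.array([np.sum(f[i] <= f[rng.integers(0, 2 * mu, q)])
                         for i in range(2 * mu)])
        keep = np.argsort(-wins)[:mu]
        pop, etas = ux[keep], ue[keep]
    return min(pop, key=lambda x: cost(x))
```

On a simple unimodal cost such as the sphere function, the loop drives the best individual well inside the initial sampling box.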
4. Main Results
The main results of this paper are presented in two parts. In the first part, seven algorithms are compared: CEP, FEP, LEP, EEP, MCEP, SCEP, and WSLEP. It will be shown that the speed and accuracy of EP in reaching the global minimum improve via the proposed approach. In the second part, the proposed algorithm (WSLEP) is compared with three algorithms from other well-known families: jumping gene (JG) [19], increasing inertia weight PSO (IWPSO) [20], and the bacterial evolutionary algorithm (BEA) [21].
Part 1. Twelve well-known benchmark cost functions are selected to compare the algorithms. Descriptions of these cost functions are presented in Table 1. Some main properties of these functions are explained below; more details can be found in [8, 22].

These cost functions can be categorized into three subgroups: unimodal high-dimensional, multimodal high-dimensional, and low-dimensional.
Functions f1–f4 are high-dimensional unimodal problems, which have only one global minimum, which is also their only local minimum. Functions f5–f8 are high-dimensional multimodal functions in which the number of local minima increases exponentially with the problem dimension. They are regarded as the most difficult class of problems for many optimization algorithms, and CEP converges slowly on these functions [8]. Functions f9–f12 are low-dimensional functions, which have only a few local minima; however, it is not easy for evolutionary algorithms to find these points.
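For illustration, one representative of each subgroup can be written down directly. These particular functions are common benchmarks of the respective types; they are not claimed to be the paper's exact f1–f12:

```python
import numpy as np

def sphere(x):
    # Unimodal high-dimensional: single minimum f = 0 at the origin.
    return float(np.sum(np.asarray(x) ** 2))

def rastrigin(x):
    # Multimodal high-dimensional: the number of local minima grows
    # exponentially with dimension; global minimum f = 0 at the origin.
    x = np.asarray(x)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def six_hump_camel(x, y):
    # Low-dimensional with a few local minima; two global minima
    # f ~ -1.0316 at (0.0898, -0.7126) and (-0.0898, 0.7126).
    return float((4 - 2.1 * x ** 2 + x ** 4 / 3) * x ** 2
                 + x * y + (-4 + 4 * y ** 2) * y ** 2)
```

The known optima of such functions make it straightforward to score an algorithm's final error, as done in Tables 3 and 5.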
The ranges of the variables and the dimensions of the cost functions are chosen according to [8]. For the high-dimensional functions, the number of variables is fixed as in [8] (unless it is limited by the cost function). All parameters of the algorithms, mentioned in Table 2, are the same in all simulation results.
The best algorithm among WSLEP, CEP, FEP, EEP, MCEP, SCEP, and LEP is to be selected on the 12 test functions. To reduce the effect of chance on the results, all of the algorithms have been run 20 times, and the averages of the obtained results are presented in Table 3.

All of the algorithms are run until a prespecified generation is reached; the number of generations is shown in the second column of Table 3. In this table, the best result obtained for each cost function is highlighted. These results show that in most cases the newly proposed method can reach the global minimum better than the other mentioned methods. All of the parameters have been considered to be the same for all methods. Furthermore, all methods have started from the same initial point (first generation). From Table 3, it can be seen that WSLEP gives the best answer for all unimodal functions. These results were expected, since WSLEP has a factor pulling offspring toward the gathered point (mean value) of the parents. Unimodal cost functions have only one local minimum, which is also their only global minimum; thus, the mean value can steer offspring toward this basin in a short time.
Multimodal functions make it difficult for EP families to find the global minimum, as they have many local and several global minima. WSLEP has the best answer in this group too. Good results in this group are very important, as there are few methods with acceptable performance on it.
Part 2. In this part, the four algorithms have been compared with the simulation parameters cited in Table 4.

Similar to the previous part, the algorithms have been tested on the 12 cost functions. The test has been repeated twenty times, and the average results have been considered. Table 5 shows the average of the best minima found by the algorithms over the 20 repetitions. In addition, this table shows that the accuracy of the proposed method stands out among the other three algorithms; that is, the accuracy of WSLEP is better than that of JG, BEA, and IWPSO.

Part 3 (statistical test). In recent years, the use of statistical tests for evaluating the performance of a new method has become a widespread technique in computational intelligence. In this section, a procedure is applied to estimate the differences between several algorithms: the contrast estimation of medians method. This method is highly recommended when the global performance is reflected by the magnitudes of the differences between the performances of the algorithms [23]. These estimators can be understood as an advanced global performance measure, especially useful for estimating the extent to which one algorithm outperforms another [23].
In the current experimental analysis, the set of median estimators is directly calculated from the average error results. Table 6 shows the estimations computed for each algorithm. By observing the rows of Table 6, it can be seen that the performance of WSLEP is more suitable compared to the other algorithms, since all its related estimators are negative and it achieves low error rates of median estimation; however, FEP achieves higher error rates in this experimental study. Table 6 also shows that WSLEP is most similar to MCEP in error rate (performance), as the contrast estimation method shows a small difference between the error rates of WSLEP and MCEP.
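The contrast estimation of medians can be sketched as follows; the median-of-pairwise-differences formulation follows the description in Derrac et al.'s tutorial [23], and the input layout (problems by algorithms) is an illustrative convention:

```python
import numpy as np

def contrast_estimation_medians(errors):
    """Contrast estimation of medians over k algorithms.

    errors : (problems, algorithms) array of average error results.
    Returns a (k, k) antisymmetric matrix whose (u, v) entry estimates the
    performance contrast between algorithms u and v; negative entries in
    row u indicate lower (better) error for algorithm u.
    """
    errors = np.asarray(errors, dtype=float)
    k = errors.shape[1]
    # Unadjusted medians of pairwise differences over all problems.
    z = np.array([[np.median(errors[:, u] - errors[:, v]) for v in range(k)]
                  for u in range(k)])
    m = z.mean(axis=1)                  # per-algorithm estimator
    return m[:, None] - m[None, :]      # contrast between each pair
```

An algorithm whose row is entirely negative, as reported for WSLEP in Table 6, is estimated to outperform every other algorithm in the comparison.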

5. Fuzzy Control of Truck BackerUpper System
In this section, the proposed EP method (WSLEP) is exploited to optimize fuzzy control of a truck backer-upper system. The truck backer-upper problem is an excellent test bed for fuzzy control systems. A fuzzy controller, formulated on the basis of human understanding of the process or identified from measured control actions, can be regarded as an emulator of a human operator. Controller design, however, may become difficult, especially if the number of state variables is large [24]. In this regard, WSLEP is used for finding the fuzzy rules, and the system is driven from specific start points; the WSLEP algorithm uses different start points for finding the optimum rules. Besides, selecting initial points and relating them in the definition of a cost function for learning the rules are important factors. In this study, the fuzzy system uses known membership functions with unknown rules; the number of rules is 35. For a detailed explanation, refer to [25]. The simulated truck backer-upper benchmark system is shown in Figure 2.
The truck position is determined by three state variables, x, y, and φ, where φ is the angle of the truck with respect to the horizontal line. The control input to the truck is the steering angle θ. The truck moves backward by a fixed unit distance at every stage. For simplicity, we assume enough clearance between the truck and the loading dock such that y does not have to be considered as a state variable [25]. The task is to design a controller such that the system steers to the final states (x_f, φ_f) = (10, 90°). We assume that x ∈ [0, 20], φ ∈ [−90°, 270°], and θ ∈ [−40°, 40°].
For simulation purposes, we need a mathematical model of the truck. We use the following approximate model [25]:

x(t + 1) = x(t) + cos[φ(t) + θ(t)] + sin[θ(t)] sin[φ(t)],
y(t + 1) = y(t) + sin[φ(t) + θ(t)] − sin[θ(t)] cos[φ(t)],    (7)
φ(t + 1) = φ(t) − sin⁻¹[2 sin(θ(t)) / b],

where b is the length of the truck; we assume b = 4 in our simulations.
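One backward stage of this approximate model might be coded as follows (angles handled in degrees, as in the problem statement; the function name is illustrative):

```python
import numpy as np

def truck_step(x, y, phi_deg, theta_deg, b=4.0):
    """One backward stage of the approximate truck model:
    x(t+1)   = x(t) + cos(phi + theta) + sin(theta) sin(phi)
    y(t+1)   = y(t) + sin(phi + theta) - sin(theta) cos(phi)
    phi(t+1) = phi(t) - asin(2 sin(theta) / b),  b = truck length."""
    phi, theta = np.radians(phi_deg), np.radians(theta_deg)
    x_next = x + np.cos(phi + theta) + np.sin(theta) * np.sin(phi)
    y_next = y + np.sin(phi + theta) - np.sin(theta) * np.cos(phi)
    phi_next_deg = phi_deg - np.degrees(np.arcsin(2.0 * np.sin(theta) / b))
    return x_next, y_next, phi_next_deg
```

As a sanity check, a truck already aligned with the dock (φ = 90°) and steering straight (θ = 0) keeps x and φ fixed while y advances by one unit per stage.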
In the first step, we define 7 fuzzy sets in [−90°, 270°] for φ, 5 fuzzy sets in [0, 20] for x, and 7 fuzzy sets in [−40°, 40°] for θ, where the membership functions are shown in Figure 3.
Now, the rules of the fuzzy system must be designed. The cost function (8) is defined as the accumulated final-state tracking error; the error is summed over all five initial points. The five algorithms CEP, FEP, EEP, MCEP, and SCEP, along with the proposed algorithm (WSLEP), are used for minimizing the defined cost function by finding proper rules. Figure 4 shows the resulting cost-versus-iteration curves. It can be seen that the proposed algorithm is fast and accurate in minimizing the cost function (8).
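The exact form of cost function (8) is not reproduced here; the sketch below assumes it accumulates the squared final-state error over the initial points, and the controller interface, stage count, and error form are illustrative assumptions layered on the truck model of [25]:

```python
import numpy as np

def control_cost(controller, initial_points, stages=60, b=4.0,
                 x_goal=10.0, phi_goal=90.0):
    """Assumed form of cost (8): simulate each initial point forward under
    `controller` (a callable mapping (x, phi) to a steering angle theta in
    degrees) and accumulate the squared final-state error."""
    total = 0.0
    for x, y, phi in initial_points:
        for _ in range(stages):
            theta = np.clip(controller(x, phi), -40.0, 40.0)  # actuator limits
            t, p = np.radians(theta), np.radians(phi)
            # Approximate truck model, one backward stage.
            x = x + np.cos(p + t) + np.sin(t) * np.sin(p)
            y = y + np.sin(p + t) - np.sin(t) * np.cos(p)
            phi = phi - np.degrees(np.arcsin(2.0 * np.sin(t) / b))
        total += (x - x_goal) ** 2 + (phi - phi_goal) ** 2
    return total
```

An evolutionary algorithm such as WSLEP would then search over rule bases, each decoded into a `controller` callable, to minimize this cost.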
The minimum cost is found by the proposed WSLEP. Table 7 shows the final fuzzy rule base for the truck backer-upper control problem.

Some initial points are chosen to test the designed controller. Figure 5 shows the truck trajectories using the designed fuzzy system for different initial conditions. We see that the fuzzy controller can successfully drive the truck to the desired position.
6. Conclusion
In this paper, we proposed a modified approach to increase the speed and accuracy of evolutionary algorithms using cost and coordinate information. The proposed method defined operators which can improve the performance of evolutionary algorithms: a Levy distribution function for strategy-parameter adaptation, calculation of a mean point for finding a proper region for breeding offspring, and shifting of strategy parameters to change the sequence of these parameters. Thereafter, a set of benchmark cost functions was used to compare the results of the proposed method with some other known algorithms. The proposed algorithm was more accurate and faster in finding the value and location of the global minimum in all three groups of cost functions. Finally, the method was exploited to optimize fuzzy control of a truck backer-upper system.
References
[1] J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[2] J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, Mass, USA, 1992.
[3] I. Rechenberg, Evolutionsstrategie: Optimierung Technischer Systeme nach Prinzipien der Biologischen Evolution, Frommann-Holzboog, Stuttgart, Germany, 1973.
[4] L. J. Fogel, A. J. Owens, and M. J. Walsh, Artificial Intelligence through Simulated Evolution, Wiley, New York, NY, USA, 1966.
[5] S. N. Sivanandam and S. N. Deepa, Introduction to Genetic Algorithms, Springer, Berlin, Germany, 2008.
[6] C.-Y. Lee and X. Yao, "Evolutionary programming using mutations based on the Lévy probability distribution," IEEE Transactions on Evolutionary Computation, vol. 8, no. 1, pp. 1–13, 2004.
[7] X. Yao and Y. Xu, "Recent advances in evolutionary programming," Journal of Computer Science and Technology, vol. 3, no. 1, pp. 1–18, 2006.
[8] X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
[9] H. Narihisa, K. Kohmoto, and K. Katayama, "Evolutionary programming with double exponential probability distribution," in Proceedings of the 2nd IASTED International Conference on Artificial Intelligence and Applications (AIA '02), pp. 358–363, 2002.
[10] K. Kohmoto, H. Narihisa, and K. Katayama, "Evolutionary programming using exponential mutation," in Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics, vol. 11, pp. 405–410, Orlando, Fla, USA, July 2002.
[11] K. Chellapilla, "Combining mutation operators in evolutionary programming," IEEE Transactions on Evolutionary Computation, vol. 2, no. 3, pp. 91–96, 1998.
[12] D. B. Fogel, L. J. Fogel, and J. W. Atmar, "Meta-evolutionary programming," in Proceedings of the 25th Asilomar Conference on Signals, Systems and Computers, R. Chen, Ed., pp. 540–545, Maple Press, San Jose, Calif, USA, November 1991.
[13] J. W. Weibull, Evolutionary Game Theory, MIT Press, Cambridge, Mass, USA, 1995.
[14] X. Yao and Y. Liu, "Scaling up evolutionary programming algorithms," in Evolutionary Programming VII, vol. 1447 of Lecture Notes in Computer Science, pp. 103–112, Springer, Berlin, Germany, 1998.
[15] H. Dong, J. He, H. Huang, and W. Hou, "Evolutionary programming using a mixed mutation strategy," Information Sciences, vol. 177, no. 1, pp. 312–327, 2007.
[16] Y. Alipouri, J. Poshtan, and Ya. Alipouri, "A modification to classical evolutionary programming by shifting strategy parameters," Applied Intelligence, vol. 38, no. 2, pp. 175–192, 2013.
[17] Y. Alipouri, J. Poshtan, Ya. Alipouri, and M. R. Alipour, "Momentum coefficient for promoting accuracy and convergence speed of evolutionary programming," Applied Soft Computing, vol. 12, no. 6, pp. 1765–1786, 2012.
[18] M. H. Sebt, Ya. Alipouri, and Y. Alipouri, "Solving resource-constrained project scheduling problem with evolutionary programming," Journal of the Operational Research Society, vol. 64, no. 9, pp. 1327–1335, 2013.
[19] W. K. S. Tang, S. T. W. Kwong, and K. F. Man, "A jumping genes paradigm: theory, verification and applications," IEEE Circuits and Systems Magazine, vol. 8, no. 4, pp. 18–36, 2008.
[20] A. M. M. de Oca, T. Stützle, M. Birattari, and M. Dorigo, "A composite particle swarm optimization algorithm," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1120–1132, 2009.
[21] H.-S. Kim and P. N. Roschke, "Design of fuzzy logic controller for smart base isolation system using genetic algorithm," Engineering Structures, vol. 28, no. 1, pp. 84–96, 2006.
[22] J.-T. Tsai, J.-H. Chou, and W.-H. Ho, "Improved quantum-inspired evolutionary algorithm for engineering design optimization," Mathematical Problems in Engineering, vol. 2012, Article ID 836597, 27 pages, 2012.
[23] J. Derrac, S. García, D. Molina, and F. Herrera, "A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms," Swarm and Evolutionary Computation, vol. 1, no. 1, pp. 3–18, 2011.
[24] J. Zhang, W. Wang, Y. Zhao, and C. Cattani, "Multiobjective quantum evolutionary algorithm for the vehicle routing problem with customer satisfaction," Mathematical Problems in Engineering, vol. 2012, Article ID 879614, 19 pages, 2012.
[25] L. X. Wang, A Course in Fuzzy Systems and Control, Prentice-Hall International, London, UK, 1997.
Copyright
Copyright © 2013 Yousef Alipouri et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.