The Scientific World Journal
Volume 2015, Article ID 734957, 11 pages
http://dx.doi.org/10.1155/2015/734957
Research Article

Benchmarking RCGAu on the Noiseless BBOB Testbed

1School of Mathematics, Statistics and Computer Science, College of Agriculture, Engineering and Science, University of KwaZulu-Natal, Westville, South Africa
2Department of Computer Sciences, Faculty of Science, University of Lagos, Lagos, Nigeria
3School of Computational and Applied Mathematics, Faculty of Science and TCSE, Faculty of Engineering and Built Environment, University of the Witwatersrand, Johannesburg, South Africa

Received 19 July 2014; Accepted 9 November 2014

Academic Editor: Albert Victoire

Copyright © 2015 Babatunde A. Sawyerr et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

RCGAu is a hybrid real-coded genetic algorithm with a “uniform random direction” search mechanism. The uniform random direction search mechanism enhances the local search capability of RCGA. In this paper, RCGAu was tested on the BBOB-2013 noiseless testbed using restarts until a maximum number of function evaluations (#FEs) of 10^5 × D is reached, where D is the dimension of the function search space. RCGAu was able to solve several test functions in the low search dimensions of 2 and 3 to the desired accuracy of 10^−8. Although RCGAu found it difficult to obtain solutions with the desired accuracy of 10^−8 for high conditioning and multimodal functions within the specified maximum #FEs, it was able to solve most of the test functions with dimensions up to 40 at lower precisions.

1. Introduction

The simple genetic algorithm (GA) introduced by Holland is a probabilistic algorithm based on Charles Darwin's theory of natural selection. GA mimics the evolutionary process through the creation of variations in each generation and the survival of the fittest individuals through the blending of genetic traits. Individuals with genetic traits that increase their probability of survival are given more opportunities to reproduce, and their offspring also profit from the heritable traits. Over time, these individuals eventually dominate the population [1, 2].

GA consists of a set of potential solutions called chromosomes, a selection operator, a crossover operator, and a mutation operator. A chromosome is a string of zeros (0s) and ones (1s). It is a metaphor of the biological chromosome in living organisms. The zeros (0s) and ones (1s) are called genes. A gene is the transfer unit of heredity. It contains genetic traits or information that is passed on from a parent solution to its offspring. The selection operator selects solutions for mating based on the principle of “survival of the fittest.” The crossover operator generates new solution pairs called children by combining the genetic materials of the selected parents. The mutation operator is an exploratory operator that is applied, with low probability, to the population of chromosomes to sustain diversity. Without the mutation operator, GAs can easily fall into premature convergence [1, 3].
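The interplay of these operators can be sketched as a minimal binary GA (a generic illustration, not the authors' implementation; the OneMax fitness, parameter values, and function names are assumptions):

```python
import random

def simple_ga(fitness, n_bits=16, pop_size=20, p_c=0.9, p_m=0.01, generations=50, seed=1):
    """Minimal binary GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # "Survival of the fittest": the fitter of two random individuals mates.
        mates = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(mates[::2], mates[1::2]):
            if rng.random() < p_c:  # one-point crossover exchanges genetic material
                cut = rng.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            children += [a[:], b[:]]
        for child in children:  # low-probability bit-flip mutation sustains diversity
            for j in range(n_bits):
                if rng.random() < p_m:
                    child[j] ^= 1
        pop = children
        best = max(pop + [best], key=fitness)
    return best

# Maximize the number of ones (OneMax), a standard pseudoboolean test problem.
best = simple_ga(sum)
```

Without the mutation loop, diversity collapses once the population converges, which is exactly the premature convergence the text warns about.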

The simple GA was designed to work on binary strings and is directly applicable to pseudoboolean objective functions. However, most real-life problems are represented as continuous parameter optimization problems. A decoding function was designed to map solutions from the binary space to the real-valued space. This decoding process can become prohibitively expensive for binary string GAs, especially as the problem dimension increases [1, 3]. To tackle this problem, real-coded genetic algorithms were introduced [4].
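The decoding step described above can be sketched as follows (a generic illustration; the function name and bit length are assumptions — note that a D-dimensional problem needs one such string per variable, which is where the cost grows with dimension):

```python
def decode(bits, lower, upper):
    """Map a binary string (list of 0/1 genes) to a real value in [lower, upper]."""
    value = int("".join(map(str, bits)), 2)          # binary string -> integer
    return lower + (upper - lower) * value / (2 ** len(bits) - 1)
```

For example, an 8-bit string resolves the interval into 255 equal steps; finer precision requires longer strings and therefore more expensive decoding.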

Real-coded genetic algorithms (RCGAs) use real-valued vectors to represent individual solutions. Surveys show that several variants of RCGAs have been proposed and used to solve a wide range of real-life optimization problems. Some recent examples can be found in [1, 4–10].

Over the last three decades, researchers have continuously improved the performance of RCGAs through hybridization. RCGAs have been hybridized with other optimizers such as the Nelder-Mead algorithm [11], the simplex method [12], quadratic approximation [13], and pattern search [14–16].

In this paper, the noiseless testbed from the black-box optimization benchmarking (BBOB) workshop is used to benchmark RCGAu, a hybrid real-coded genetic algorithm that incorporates a “uniform random direction” local search technique.

The RCGAu algorithm is presented in Section 2, Section 3 describes the experimental procedure and parameter settings, Section 4 provides the CPU timing for the experiments, Section 5 presents the results and discussion, and finally Section 6 concludes the paper with some recommendations.

2. The RCGAu Algorithm

RCGAu is a hybrid RCGA with a simple derivative-free local search technique called the “uniform random direction” method. The local search technique operates on all individuals after the mutation operator has been applied to the population.

The RCGAu used in this work is a modified version of the RCGAu used in [16, 17]. It consists of five major operators, namely, tournament selection, blend-α crossover, nonuniform mutation, the uniform random direction local search method, and a stagnation alleviation mechanism. Algorithm 1 shows the RCGAu algorithm.

Algorithm 1: The RCGAu Algorithm.

The notations used in this paper are defined as follows.

P(t) denotes the population of individual solutions at time t, N is the size of P(t), σ_f represents the standard deviation of the fitness values of all solutions in P(t), P′(t) is the mating pool containing the parent solutions, C(t) is the population of offspring solutions obtained after applying crossover on the parents in P′(t), p_c is the crossover probability, M(t) is the resultant population of solutions after applying mutation on C(t), p_m is the mutation probability, and U(t) is the population of solutions obtained after ulsearch has been applied to M(t), where ulsearch denotes the uniform random direction local search. Also, ε denotes a very small positive value [18].

The evolutionary process in Algorithm 1 starts by initializing P(0) from the search space Ω. The domain of Ω is defined by specifying the upper and lower limits of each jth component of x; that is, x_j^u and x_j^l, j = 1, …, D. Next, the fitness value f(x_i), i = 1, …, N, is calculated, and the population diversity of P(t) is measured by calculating the standard deviation σ_f of the fitness values.
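The initialization and diversity measurement just described can be sketched as follows (a generic Python illustration, not the authors' Matlab code; the function names and the sphere test function are assumptions):

```python
import math
import random

def init_population(n, lower, upper, seed=0):
    """Sample n solutions uniformly from the box defined by lower/upper limits."""
    rng = random.Random(seed)
    d = len(lower)
    return [[rng.uniform(lower[j], upper[j]) for j in range(d)] for _ in range(n)]

def fitness_std(pop, f):
    """Standard deviation of the fitness values: the diversity measure sigma_f."""
    vals = [f(x) for x in pop]
    mean = sum(vals) / len(vals)
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))

sphere = lambda x: sum(v * v for v in x)          # simple test objective
pop = init_population(30, [-5.0] * 5, [5.0] * 5)  # N = 30 solutions in [-5, 5]^5
```

A population whose members all have equal fitness yields sigma_f = 0, which is what triggers the stagnation alleviation mechanism below.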

If σ_f < ε and the global optimum has not been found, then P(t) is refreshed with newly generated solutions using the function perturb(P(t)). P(t) is refreshed by sorting the solutions according to their fitness values and preserving the top solutions of P(t). The remaining solutions of P(t) are replaced with uniformly generated random values from the interval [x_j^l, x_j^u], and the resultant population P′(t) is created. N′ is the size of the mating pool and N′ ≤ N. If, on the other hand, σ_f ≥ ε, then tournament selection is applied on P(t) to create an equivalent mating pool P′(t).

The tournament selection scheme works by selecting n_t solutions uniformly at random from P(t), where n_t is the tournament size and n_t < N. The selected individuals are compared using their fitness values, and the best individual is selected and assigned to P′(t). This procedure is repeated N times to populate P′(t).
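The tournament scheme above can be sketched as follows (a generic minimization sketch; the function and parameter names are assumptions):

```python
import random

def tournament_select(pop, fitness, pool_size, t_size, rng=random):
    """Fill a mating pool by repeatedly taking the best of t_size randomly
    sampled solutions (minimization: lower fitness value is better)."""
    pool = []
    for _ in range(pool_size):
        contestants = rng.sample(pop, t_size)   # t_size distinct solutions
        pool.append(min(contestants, key=fitness))
    return pool

# Demo: minimizing f(x) = x[0] over ten one-dimensional points.
demo_pool = tournament_select([[float(i)] for i in range(10)],
                              lambda x: x[0],
                              pool_size=10, t_size=3, rng=random.Random(0))
```

With t_size = 3 the worst individual can never win a tournament (it is never the minimum of three distinct contestants), which is how the scheme applies selection pressure.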

After the mating pool has been created, blend-α crossover is applied to a pair of parent solutions (x_1, x_2) if a randomly generated number drawn uniformly from the interval [0, 1] is greater than the specified crossover probability threshold p_c. Blend-α crossover creates a pair of offspring (y_1, y_2) from the interval extended by α on both sides of the parents, as follows:

y_{1,j} = (1 − γ_j) x_{1,j} + γ_j x_{2,j},
y_{2,j} = γ_j x_{1,j} + (1 − γ_j) x_{2,j},

where γ_j = (1 + 2α)u_j − α, u_j is a uniform random number drawn from the interval [0, 1], and α = 0.5. The new pair (y_1, y_2) is then copied to the set C(t); otherwise the parent pair is copied to C(t).
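One standard BLX-α formulation consistent with the description above can be sketched as follows (the mapping γ = (1 + 2α)u − α is an assumption about the exact variant used):

```python
import random

def blx_alpha(x1, x2, alpha=0.5, rng=random):
    """Blend-alpha crossover: each offspring gene lies in the parents' interval
    extended by alpha times its width on both sides."""
    y1, y2 = [], []
    for a, b in zip(x1, x2):
        u = rng.random()                      # uniform in [0, 1]
        gamma = (1 + 2 * alpha) * u - alpha   # uniform in [-alpha, 1 + alpha]
        y1.append((1 - gamma) * a + gamma * b)
        y2.append(gamma * a + (1 - gamma) * b)
    return y1, y2

# Demo: parents at the corners of [0, 1]^2; offspring genes fall in [-0.5, 1.5].
y1, y2 = blx_alpha([0.0, 0.0], [1.0, 1.0], rng=random.Random(3))
```

A useful property of this formulation is that the two offspring genes always sum to the parents' sum, so the crossover is mean-preserving per component.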

Then the nonuniform mutation [4] is applied to the components x_j of each member x of C(t) with probability p_m, as follows:

x′_j = x_j + Δ(t, x_j^u − x_j) if τ < 0.5,
x′_j = x_j − Δ(t, x_j − x_j^l) otherwise,

where τ is a uniformly distributed random number in the interval [0, 1] and x_j^u and x_j^l are the upper and lower boundaries of x_j, respectively. The function Δ(t, y) given below takes a value in the interval [0, y]:

Δ(t, y) = y(1 − u^{(1 − t/T)^b}),

where u is a uniformly distributed random number in the interval [0, 1], T is the maximum number of generations, and b is a parameter that determines the nonuniform strength of the mutation operator. The mutated individual x′ is then copied to the set M(t); otherwise x is copied to M(t).
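Michalewicz's nonuniform mutation [4] can be sketched as follows (function and parameter names are assumptions). The key behavior is that Δ shrinks toward 0 as t approaches T, shifting the operator from exploration to fine-tuning:

```python
import random

def nonuniform_mutate(x, lower, upper, t, t_max, b=5.0, p_m=0.1, rng=random):
    """Nonuniform mutation: perturb each gene with probability p_m by
    Delta(t, y) = y * (1 - u^((1 - t/t_max)^b)), toward a random boundary."""
    def delta(y):
        u = rng.random()
        return y * (1 - u ** ((1 - t / t_max) ** b))
    child = list(x)
    for j in range(len(x)):
        if rng.random() < p_m:
            if rng.random() < 0.5:
                child[j] += delta(upper[j] - child[j])   # move toward upper bound
            else:
                child[j] -= delta(child[j] - lower[j])   # move toward lower bound
    return child
```

Because Δ(t, y) ≤ y, the mutated gene always stays within its bounds, and at t = T the perturbation vanishes entirely.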

Then ulsearch is applied on each solution x_i in M(t) with the aim of performing local searches around the neighborhood of each solution. ulsearch works by randomly selecting a solution x̂ and creating a trial point z using

z = x_i + λu, (4)

where λ is a step size parameter and u is a directional cosine with random components

u_j = R_j / (R_1^2 + ⋯ + R_D^2)^{1/2}, (5)

where R_j is a uniform random number in the interval [−1, 1]. There are cases when the components of the trial point generated by (4) fall outside the search space during the search. In these cases, the components z_j of z are regenerated using

z_j = x_{i,j} + w(x̂_j − x_{i,j}), (6)

where w is a uniform random number in the interval [0, 1] and x̂_j is the corresponding component of the randomly selected solution x̂.
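The trial-point construction can be sketched as follows (a sketch under stated assumptions: the exact out-of-bounds repair rule is taken from the description above, and all names are hypothetical):

```python
import math
import random

def random_unit_direction(d, rng=random):
    """Directional cosines: components uniform in [-1, 1], normalized to unit length."""
    while True:
        r = [rng.uniform(-1, 1) for _ in range(d)]
        norm = math.sqrt(sum(v * v for v in r))
        if norm > 0:
            return [v / norm for v in r]

def trial_point(x, step, lower, upper, x_rand, rng=random):
    """Trial point z = x + step * u; components that leave the search space are
    regenerated between x and a randomly selected solution x_rand."""
    u = random_unit_direction(len(x), rng)
    z = [xi + step * ui for xi, ui in zip(x, u)]
    for j in range(len(z)):
        if not (lower[j] <= z[j] <= upper[j]):
            z[j] = x[j] + rng.random() * (x_rand[j] - x[j])
    return z
```

When no repair is needed, the trial point sits exactly at distance λ (the step size) from x, since u has unit length.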

The step size parameter λ is initialized at time t = 0 according to [15, 16] by (7), where ω ∈ (0, 1). The idea of using (7) to generate the initial step length is to accelerate the search by starting with a suitably large step size to quickly traverse the search space; as the search progresses, the step size is adaptively adjusted at the end of each generation t by (8), where ρ is the number of Euclidean distances between the points nearest to the mean of a set of randomly selected distinct points.

After the trial point z has been created, it is evaluated and compared with x_i. If f(z) < f(x_i), then z is used to replace x_i; otherwise the search direction is changed by changing the sign of the step length. The new step length is used to recalculate a new trial point. After the new trial point has been recalculated and evaluated, it replaces x_i if it improves on f(x_i); otherwise x_i is retained.
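One such accept-or-reverse move can be sketched in a self-contained form (minimization assumed; the function name is hypothetical, and the direction is regenerated per call for simplicity):

```python
import math
import random

def ulsearch_step(x, f, step, rng=random):
    """One ulsearch move (minimization): try z = x + step * u along a random
    unit direction u; on failure reverse the direction (negate the step) and
    retry; keep x if neither trial point improves on it."""
    d = len(x)
    r = [rng.uniform(-1, 1) for _ in range(d)]
    norm = math.sqrt(sum(v * v for v in r)) or 1.0
    u = [v / norm for v in r]
    for s in (step, -step):          # forward trial, then reversed direction
        z = [xi + s * ui for xi, ui in zip(x, u)]
        if f(z) < f(x):
            return z
    return x                         # neither direction improved: retain x
```

Because the move is accepted only on improvement, repeated application can never worsen a solution, so the local search is monotone.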

At the end of ulsearch, P(t) is updated with U(t) to form P(t + 1), and elitism is used to replace the worst point in P(t + 1) with the best point in P(t), because the generational model is the replacement strategy adopted in this work [19].

3. Experimental Procedure and Parameter Settings

The experimental setup was carried out according to [20] on the benchmark functions provided in [21, 22]. Two independent restart strategies were used to restart RCGAu whenever the population stagnates or when the maximum number of generations is exceeded and the global optimum is not found. For each restart strategy, the experiment is reinitialized with an initial population which is uniformly and randomly sampled from the search space [6, 18].

The two stopping conditions used for the restart strategies are as follows.

(i) A test for stagnation is carried out to check whether the best solution obtained so far did not vary by more than a small threshold during the last generations, as in [6].

(ii) A test is carried out to check whether the maximum number of generations is reached and the global optimum is not found.

The parameters used for RCGAu on all functions are

(i) population size, set relative to the problem dimension D;
(ii) maximum number of function evaluations, 10^5 × D;
(iii) tournament size;
(iv) crossover rate;
(v) mutation rate;
(vi) nonuniformity factor b for the mutation;
(vii) elitism;
(viii) crafting effort [20].

4. CPU Timing Experiment

The CPU timing experiment was conducted for RCGAu using the same independent restart strategies on an AMD Turion(tm) II Ultra Dual-Core Mobile processor running Microsoft Windows Professional (service pack 1) with Matlab.

The time per function evaluation was measured for RCGAu in dimensions 2, 3, 5, 10, 20, and 40, respectively.

5. Results

The results of the empirical experiments conducted on RCGAu according to [20] on the benchmark functions given in [21, 22] are presented in Figures 1, 2, and 3 and in Tables 1 and 2.

Table 1: Expected running time (ERT in number of function evaluations) divided by the best ERT measured during BBOB-2009. The ERT and, in braces, as dispersion measure, the half difference between the 90 and 10 percentiles of bootstrapped run lengths appear in the second row of each cell, the best ERT in the first. The different target Δf-values are shown in the top row. #succ is the number of trials that reached the (final) target f_opt + 10^−8.
Table 2: ERT loss ratio versus the budget (both in number of f-evaluations divided by dimension). The target value f_t for a given budget FEvals is the best target f-value reached within the budget by the given algorithm. Shown is the ERT of the given algorithm divided by the best ERT seen in GECCO-BBOB-2009 for the target f_t, or, if the best algorithm reached a better target within the budget, the budget divided by this best ERT. Line: geometric mean. Box-Whisker error bar: 25–75%-ile with median (box), 10–90%-ile (caps), and minimum and maximum ERT loss ratio (points). The vertical line gives the maximal number of function evaluations in a single trial in this function subset. See also Figure 3 for results on each function subgroup.
Figure 1: Expected number of f-evaluations (ERT, lines) to reach f_opt + Δf; median number of f-evaluations (+) to reach the most difficult target that was reached at least once but not always; maximum number of f-evaluations in any trial (×); interquartile range with median (notched boxes) of simulated run lengths to reach f_opt + Δf; all values are divided by dimension and plotted as log10 values versus dimension. Numbers above ERT-symbols (if appearing) indicate the number of trials reaching the respective targets. The light thick line with diamonds indicates the respective best results from BBOB-2009 for Δf = 10^−8. Horizontal lines mean linear scaling and slanted grid lines depict quadratic scaling.
Figure 2: Empirical cumulative distribution functions (ECDF), plotting the fraction of trials with an outcome not larger than the respective values on the x-axis. Left subplots: ECDF of the number of function evaluations (FEvals) divided by search space dimension D, to fall below f_opt + Δf with Δf = 10^k, where k is the first value in the legend. The thick red line represents the most difficult target value f_opt + 10^−8. Legends indicate for each target the number of functions that were solved in at least one trial within the displayed budget. Right subplots: ECDF of the best achieved Δf for given running times in number of function evaluations (from right to left cycling cyan-magenta-black) and final Δf-value (red), where Δf and Df denote the difference to the optimal function value. Light brown lines in the background show ECDFs for the most difficult target of all algorithms benchmarked during BBOB-2009.
Figure 3: ERT loss ratios (see Table 2 for details). Each cross (+) represents a single function and the line is the geometric mean.

Figure 1 shows the performance of RCGAu on all the noiseless problems with dimensions 2, 3, 5, 10, 20, and 40. RCGAu was able to solve many test functions in the low search dimensions of 2 and 3 to the desired accuracy of 10^−8. It was able to solve most test functions with dimensions up to 40 at lower precision.

Although RCGAu found it difficult to obtain a solution with the desired accuracy of 10^−8 for high conditioning and multimodal functions within the specified maximum #FEs, it was able to solve a number of these functions at lower precisions, with the solvable dimension varying by function.

In Figure 2, the left subplots graphically illustrate the empirical cumulative distribution function (ECDF) of the number of function evaluations divided by the dimension of the search space, while the right subplots show the ECDF of the best achieved Δf. This figure shows the performance of RCGAu in terms of function evaluations.

Table 1 presents the performance of RCGAu in terms of the expected running time (ERT). This measure estimates the running time of RCGAu in number of function evaluations, reported relative to the best ERT measured during the BBOB-2009 workshop. This benchmark shows that RCGAu needs some improvement in terms of performance.
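The ERT measure underlying Table 1 is, per the BBOB setup [20], the total number of function evaluations summed over all trials (successful and unsuccessful) divided by the number of successful trials. A minimal sketch (function name is an assumption):

```python
def expected_running_time(evals, successes):
    """ERT: sum of function evaluations over all trials divided by the number
    of trials that reached the target; infinite when no trial succeeded."""
    n_succ = sum(successes)
    if n_succ == 0:
        return float("inf")
    return sum(evals) / n_succ
```

For example, one success after 120 evaluations plus two failed trials of 500 evaluations each gives an ERT of 1120 evaluations; dividing by the best ERT from BBOB-2009 yields the ratios reported in the table.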

6. Conclusion

The performance of RCGAu on the noiseless black-box optimization testbed has been average on a number of problems, but it excelled in solving several functions. Studies are currently being carried out to find out why RCGAs do not efficiently solve highly conditioned problems. Further modifications to RCGAs are needed to exploit the full strength of evolutionary processes.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford University Press, New York, NY, USA, 1996.
  2. J. H. Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence, MIT Press, Cambridge, Mass, USA, 1975.
  3. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass, USA, 1989.
  4. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 1996.
  5. M. M. Ali and A. Törn, “Population set-based global optimization algorithms: some modifications and numerical studies,” Computers & Operations Research, vol. 31, no. 10, pp. 1703–1725, 2004.
  6. Y.-C. Chuang and C.-T. Chen, “Black-box optimization benchmarking for noiseless function testbed using a direction-based RCGA,” in Proceedings of the 14th International Conference on Genetic and Evolutionary Computation (GECCO '12), pp. 167–174, Philadelphia, PA, USA, July 2012.
  7. K. Deep and M. Thakur, “A new crossover operator for real coded genetic algorithms,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 895–911, 2007.
  8. K. Deep and M. Thakur, “A new mutation operator for real coded genetic algorithms,” Applied Mathematics and Computation, vol. 193, no. 1, pp. 211–230, 2007.
  9. K. Deep and M. Thakur, “A real coded multi parent genetic algorithms for function optimization,” Applied Mathematics and Computation, vol. 1, no. 2, pp. 67–83, 2008.
  10. P. Kaelo and M. M. Ali, “Integrated crossover rules in real coded genetic algorithms,” European Journal of Operational Research, vol. 176, no. 1, pp. 60–76, 2007.
  11. R. Chelouah and P. Siarry, “Genetic and Nelder-Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions,” European Journal of Operational Research, vol. 148, no. 2, pp. 335–348, 2003.
  12. J. Andre, P. Siarry, and T. Dognon, “An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization,” Advances in Engineering Software, vol. 32, no. 1, pp. 49–60, 2001.
  13. K. Deep and K. N. Das, “Quadratic approximation based hybrid genetic algorithm for function optimization,” Applied Mathematics and Computation, vol. 203, no. 1, pp. 86–98, 2008.
  14. W. E. Hart, Adaptive global optimization with local search [Ph.D. thesis], University of California, San Diego, Calif, USA, 1994.
  15. B. A. Sawyerr, M. M. Ali, and A. O. Adewumi, “A comparative study of some real-coded genetic algorithms for unconstrained global optimization,” Optimization Methods & Software, vol. 26, no. 6, pp. 945–970, 2011.
  16. B. A. Sawyerr, Hybrid real coded genetic algorithms with pattern search and projection [Ph.D. thesis], University of Lagos, Lagos, Nigeria, 2010.
  17. B. A. Sawyerr, A. O. Adewumi, and M. M. Ali, “Real-coded genetic algorithm with uniform random local search,” Applied Mathematics and Computation, vol. 228, pp. 589–597, 2014.
  18. B. A. Sawyerr, A. O. Adewumi, and M. M. Ali, “Benchmarking projection-based real coded genetic algorithm on BBOB-2013 noiseless function testbed,” in Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), pp. 1193–1200, Amsterdam, The Netherlands, July 2013.
  19. K. A. DeJong, An analysis of the behavior of a class of genetic adaptive systems [Ph.D. thesis], University of Michigan, Ann Arbor, Mich, USA, 1975.
  20. N. Hansen, A. Auger, S. Finck, and R. Ros, “Real-parameter black-box optimization benchmarking 2012: experimental setup,” Tech. Rep., INRIA, 2012.
  21. S. Finck, N. Hansen, R. Ros, and A. Auger, “Real-parameter black-box optimization benchmarking 2009: presentation of the noiseless functions,” Tech. Rep. 2009/20, Research Center PPE, 2009.
  22. N. Hansen, S. Finck, R. Ros, and A. Auger, “Real-parameter black-box optimization benchmarking 2009: noiseless functions definitions,” Tech. Rep. RR-6829, INRIA, 2009.