Nature-Inspired Algorithms and Applications: Selected Papers from CIS2013
Research Article | Open Access
Dazhi Jiang, Zhun Fan, "The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators", Mathematical Problems in Engineering, vol. 2015, Article ID 474805, 15 pages, 2015. https://doi.org/10.1155/2015/474805
The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators
Abstract
At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy. This suggests that algorithms designed automatically by computers can compete with algorithms designed by human beings.
1. Introduction
At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. As a result, current evolutionary algorithms inevitably incorporate human preconceptions in their designs. This situation encourages us to ask the following questions: are there any algorithms that can design evolutionary algorithms automatically? A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In the 13th century, the Frenchman Villard de Honnecourt proposed a perpetual motion machine for the first time; the truth, however, is that such an entity is impossible in the real world. In the world of computing, by contrast, having the machine automatically design algorithms is feasible, and several automatic algorithm design techniques have been proposed in recent years. For example, hyper-heuristics include search methods that automatically select and combine simpler heuristics, creating a generic heuristic that is used to solve more general instances of a given type of optimization problem. Hence, hyper-heuristics search in the space of heuristics instead of in the problem solution space [1], raising the level of generality of the solutions they produce. Ant colony algorithms are population-based methods widely used in combinatorial optimization problems. Tavares and Pereira [2] proposed a grammatical evolution [3] approach to automatically design ant colony optimization algorithms. The grammar adopted by this framework has the ability to guide the learning of novel architectures by rearranging components regularly found in human-designed variants.
Furthermore, Tavares and Pereira [4] proposed a strongly typed genetic programming [5] approach to automatically evolve the communication mechanism that allows ants to cooperatively solve a given problem. For these two applications, results obtained with several TSP instances show that the evolved pheromone update strategies are effective, exhibit a strong generalization capability, and are competitive with human-designed variants. For rule induction algorithms, Pappa and Freitas [6] proposed the use of grammar-based genetic programming (GGP) to automatically evolve rule induction algorithms. Their experiments involving 11 data sets show that novel rule induction algorithms can be automatically generated using GGP. Oltean and Grosan [7] used the multi-expression programming (MEP) [8] technique to evolve evolutionary algorithms, in which each MEP chromosome encodes multiple EAs.
Although the aforementioned approaches have different research objectives and contents, they have one thing in common: they use automatic methods to design algorithms, which shows that automatic programming can build algorithms to solve problems without human design.
As the core components of evolutionary algorithms, the genetic operators, such as mutation and combination, are more variable and complicated than other components of the algorithm framework, such as initialization and selection. Furthermore, in previous work on designing new algorithms, the most difficult and important part has been the design of the genetic operators. In our work, we also focus on designing genetic operators. This paper proposes a novel approach to designing genetic operators in an evolutionary algorithm, namely, the evolutionary algorithm based on automatic designing of genetic operators (EA^{2}DGO), which uses MEP with a new encoding scheme [9] to automatically generate genetic operators for the problems being solved.
The organization of this paper is as follows. In Section 2, the commonality of three classical evolutionary algorithms is introduced and discussed; in Section 3, the general scheme of designing genetic operators is presented. In Section 4, the framework of EA^{2}DGO is described, explaining the mechanism of automatically designing genetic operators. Experimental verification is presented in Section 5. Section 6 gives conclusions and discussion.
2. Three Classical Evolutionary Algorithms
It is important to investigate what expressions of genetic operators are amenable to automatic design, for which we can draw inspiration from analyzing the standard genetic algorithm (SGA) [10], particle swarm optimization (PSO) [11], and differential evolution (DE) [12, 13].
In the classical GA's arithmetic crossover operator, new vectors are generated by linear combination of two different individuals. PSO and DE can be considered extensions of the SGA. In PSO's combination operator, the particle's personal experience and the population's best experience influence the movement of each particle. In DE's common mutation operator, the new vector is generated from the difference of two individuals in the population, added to a third individual according to certain rules. These three algorithms are different but share some common characteristics. In the following subsections, the equations describing the operators of the SGA, PSO, and DE algorithms are analyzed in detail.
2.1. Genetic Algorithm
GA with real coding usually adopts arithmetic crossover as one of its genetic operators. Take total arithmetic crossover as an example. Assume NP is a constant representing the size of the population and D is the dimension of the parameter vectors. The population is then expressed as X^t = {x_1^t, x_2^t, ..., x_NP^t}, where t is the generation. Select two individuals x_1^t, x_2^t from the population according to a certain rule; the child vectors c_1^{t+1} and c_2^{t+1} are generated and expressed, respectively, as the following:

c_1^{t+1} = α x_1^t + (1 − α) x_2^t,  (1)
c_2^{t+1} = (1 − α) x_1^t + α x_2^t,  (2)

where α ∈ [0, 1].
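As a concrete illustration, the total arithmetic crossover above can be sketched in a few lines. This is a minimal sketch; the function and variable names are illustrative, not part of the paper:

```python
import random

def arithmetic_crossover(x1, x2, alpha=None):
    """Total arithmetic crossover: each child is a convex combination
    of the two parent vectors, per equations (1) and (2)."""
    if alpha is None:
        alpha = random.random()  # alpha drawn from [0, 1]
    c1 = [alpha * a + (1 - alpha) * b for a, b in zip(x1, x2)]
    c2 = [(1 - alpha) * a + alpha * b for a, b in zip(x1, x2)]
    return c1, c2
```

For example, with alpha = 0.25, parents [0, 0] and [4, 8] yield children [3, 6] and [1, 2].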
2.2. Particle Swarm Optimization
PSO, like other evolutionary algorithms, is also a population-based search algorithm and starts with an initial population of randomly generated solutions called particles. Each particle in PSO has a velocity and a position. PSO remembers both the best position found by all particles and the best position found by each particle in the search process. For a search problem in a D-dimensional space, a particle represents a potential solution. The velocity and position of the jth dimension of the ith particle are updated according to the following equations:

v_ij^{t+1} = v_ij^t + c_1 r_1 (pbest_ij^t − x_ij^t) + c_2 r_2 (gbest_j^t − x_ij^t),  (3)
x_ij^{t+1} = x_ij^t + v_ij^{t+1},  (4)

where i = 1, 2, ..., NP is the particle's index, x_i is the position of the ith particle, and v_i represents the velocity of the ith particle. pbest_i represents the best previous position yielding the best fitness value for the ith particle, and gbest is the best position discovered by the whole population. r_1 and r_2 are two random numbers independently generated within the range [0, 1], and c_1 and c_2 are two learning factors reflecting the weights of the stochastic acceleration terms that pull each particle toward the pbest and gbest positions, respectively. t indicates the iteration.
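The update rules above can be sketched as follows (classic PSO without an inertia weight, as described; the names are illustrative):

```python
import random

def pso_update(x, v, pbest, gbest, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle, per
    equations (3) and (4); r1, r2 are redrawn per dimension."""
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v
```

Note that when a particle sits exactly at both pbest and gbest, the attraction terms vanish and the particle coasts on its current velocity.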
2.3. Differential Evolution
Differential evolution (DE) is a population-based, direct, robust, and efficient search method. Like other evolutionary algorithms, DE starts with an initial population randomly generated in the solution space. Assume that NP is a constant representing the size of the population and D is the dimension of the parameter vectors; the population is expressed as X^t = {x_i^t | i = 1, 2, ..., NP}, where t is the generation. The main difference between DE and other evolutionary algorithms, such as GA and PSO, is its method of generating new population vectors. In order to generate a new population vector, three vectors in the population are randomly selected, and the weighted difference of two of them is added to the third one. After crossover, the new vector is compared with a predetermined vector in the population. If the new vector is better than the predetermined one, it replaces it; otherwise, the predetermined vector is copied to the next generation's population. For traditional DE, the mutation procedure is illustrated as follows.
For the ith vector x_i^t from generation t, a mutant vector v_i^{t+1} is defined by

v_i^{t+1} = x_r1^t + F (x_r2^t − x_r3^t),  (5)

where i = 1, 2, ..., NP; r1, r2, r3 ∈ {1, 2, ..., NP}; and i, r1, r2, and r3 are mutually different. The differential mutation parameter F, known as the scale factor, is a positive real number normally between 0 and 1 but can also take values greater than 1. Generally speaking, larger values of F result in higher diversity in the generated population, and lower values lead to faster convergence.
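The DE/rand/1 mutation in (5) can be sketched as follows (a minimal sketch; names are illustrative):

```python
import random

def de_mutation(pop, i, F=0.5):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3
    mutually distinct and different from the target index i."""
    candidates = [k for k in range(len(pop)) if k != i]
    r1, r2, r3 = random.sample(candidates, 3)  # distinct by construction
    return [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
            for j in range(len(pop[i]))]
```

When every vector in the population is identical, the difference term vanishes and the mutant equals the base vector, which is a quick sanity check for the implementation.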
3. General Scheme of Designing Genetic Operators
3.1. The General Characteristics of Genetic Operators
Through the analysis above, the following observations are made.
(i) A genetic operator is a formula composed of a group of objects (such as x_i, pbest, and gbest), arithmetic operators (such as +, −, ×), and parameters (such as F, c_1, c_2, r_1, and r_2).
(ii) A formula representing a genetic operator can have many variants. For example, DE and PSO have similar but different formulas, which in turn differ from the formula of the SGA.
(iii) While existing evolutionary algorithms (including SGA, PSO, and DE) have different formulas, their genetic operators all share characteristics (i) and (ii).
According to the observations (i), (ii), and (iii), we could design a scheme to represent the genetic operator for automatic design.
3.2. The Scheme of Genetic Operators
3.2.1. The Encoding Scheme of Genetic Operator Chromosome
Firstly, we need an entity to express genetic operators that can be distinguished and manipulated by a computer. Genetic programming (GP), gene expression programming (GEP), and multi-expression programming (MEP) are three kinds of methods focused on automatically generating computer programs for given problems. Accordingly, a similar chromosome structure C_i^t, like that of MEP, is presented to express genetic operators, where i is the chromosome's index, t is the generation, and the genetic operator chromosome is composed of a head and a tail. The head contains symbols that represent both functions (elements from the function set F) and terminals (elements from the terminal set T), whereas the tail contains only terminals. Generally, F is composed of arithmetic operators, and T is composed of objects and parameters.
Each gene in the chromosome encodes a terminal or a function symbol. A gene that encodes a function includes pointers toward the function's arguments. Function arguments always have indices of higher values than the position of the function itself in the chromosome.
There is a small difference compared with the chromosome representation of MEP: in an MEP chromosome, the function arguments have indices of lower values than the position of the function itself. However, the two representations are essentially the same. The chromosome presented in this way is also similar to the GEP chromosome, whose tail is constructed only with terminal symbols. Further information about the relationship between GEP and MEP chromosomes can be found in [14].
3.2.2. The Decoding Scheme of Genetic Operator Chromosome
Chromosome translation is obtained by parsing the chromosome from right to left. A terminal symbol specifies a simple expression. A function symbol specifies a complex expression obtained by connecting the operands specified by the argument positions with the current function symbol. An MEP chromosome encodes more than one problem solution, and there is neither practical nor theoretical evidence that one of these expressions is better than the others before fitness calculation. For simplicity, the expression tree rooted at the first symbol is chosen as the chromosome's final representation.
After decoding, a genotype C_i^t can be translated into a phenotype which can be further processed by a computer.
3.2.3. The Characteristics of the Scheme of Designing Genetic Operators
According to the encoding and decoding scheme of genetic operators, two characteristics are essential for automatically designing genetic operators.
(i) Changeability: the characteristic most different from traditional genetic operators is that the operator's structure can be reconstructed by a computer, which means that genetic operators can be generated and changed according to the requirements of the problem. For example, every gene can be changed into another terminal or function symbol, and every function argument can be changed into another function argument; when a gene or function argument is changed, the genotype and phenotype of the chromosome are transformed.
(ii) Adaptability: an automatic way of generating novel formulas (using the existing objects, arithmetic operators, and parameters) may lead to a very novel design of evolutionary algorithm which can adapt itself to the problem at hand, which means genetic operators are simultaneously searched and designed in the process of problem solving.
In the following, we take a snapshot of the chromosomes used to represent the genetic operators of the three classic evolutionary algorithms.
3.3. The Chromosome for Three Classic Evolutionary Algorithms
3.3.1. SGA
If we consider (1) with example values for α, x_1, and x_2, then (1) can be expressed by an expression tree (phenotype) as shown in Figure 1.
Taking the length of a chromosome as 7, the expression tree in Figure 1 can be expressed as a genotype as shown in Table 1.

3.3.2. PSO
For PSO, example values for c_1, c_2, r_1, r_2, and the particle's current state are used. The PSO particle updating equations can be expressed by an expression tree (phenotype) as shown in Figure 2.
Since (3) and (4) are more complicated than the SGA operator, they need a longer chromosome to express the equivalent genotype. For this example, the length of the chromosome is 18, and an equivalent genotype for the phenotype can be expressed as shown in Table 2.

3.3.3. DE
For the mutation operator of DE, if we take example values for x_r1, x_r2, x_r3, and F, the DE mutation equation can be expressed by an expression tree (phenotype) as shown in Figure 3.
Here the length of the chromosome is 7, and an equivalent genotype for the phenotype can be expressed as shown in Table 3.
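The genotype-to-phenotype translation for the DE mutation operator can be sketched in code. The gene layout below is an illustrative reconstruction, not the exact contents of Table 3: each gene is either a terminal or a function whose argument pointers refer to higher positions, decoding walks the genes right to left, and the expression rooted at gene 0 is taken as the final representation:

```python
import operator

def decode(chromosome, terminals):
    """Evaluate every sub-expression of a genotype; the value at
    index 0 is the chromosome's final representation."""
    n = len(chromosome)
    values = [None] * n
    for pos in range(n - 1, -1, -1):      # right-to-left parse
        gene = chromosome[pos]
        if isinstance(gene, str):         # terminal symbol
            values[pos] = terminals[gene]
        else:                             # (function, arg_pos1, arg_pos2)
            fn, a, b = gene               # a, b > pos by construction
            values[pos] = fn(values[a], values[b])
    return values[0]

# A 7-gene genotype encoding the DE mutant  x_r1 + F * (x_r2 - x_r3):
chrom = [
    (operator.add, 1, 2),   # 0: gene1 + gene2
    "x_r1",                 # 1
    (operator.mul, 3, 4),   # 2: gene3 * gene4
    "F",                    # 3
    (operator.sub, 5, 6),   # 4: gene5 - gene6
    "x_r2",                 # 5
    "x_r3",                 # 6
]
result = decode(chrom, {"x_r1": 2.0, "x_r2": 5.0, "x_r3": 1.0, "F": 0.5})
# result = 2.0 + 0.5 * (5.0 - 1.0) = 4.0
```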

From the analysis above, it is evident that the genetic operators of SGA, PSO, and DE can each be expressed by a specific chromosome. However, for these three classic evolutionary algorithms, the structures of the chromosomes are not changed; they remain static through the whole evolutionary run. This is generally true for many other variants of evolutionary algorithms as well.
Now the question is "can an automatic programming method construct the genetic operators of evolutionary algorithms automatically, so that the operators are generated in the running process of problem solving rather than predefined?" As we know, evolutionary algorithms generally have capabilities of self-organizing, self-adapting, and self-regulating, but their genetic operators are normally predetermined. While algorithms constructed with predetermined operators are effective for certain problems, their performance on other problems may not be so competitive. This phenomenon can be explained to some extent by the famous "No Free Lunch" theorem [15]. If there is an algorithm framework in which the genetic operators can automatically adjust themselves during the problem-solving process as the nature of the problem changes, the capabilities of self-organizing, self-adapting, and self-regulating of the algorithms can be further enhanced, and the limit imposed by the "No Free Lunch" theorem may be relaxed. Peng et al. proposed a population-based algorithm portfolio (PAP) [16], which distributes the computation time among multiple different algorithms to decrease the inherent risk associated with the selection of algorithms. However, the algorithms in PAP are still predefined, which is very different from our method. In the following, the details of a framework that can automatically design genetic operators in the running process of problem solving are introduced.
4. Evolutionary Algorithm Based on Automatic Designing of Genetic Operators (EA^{2}DGO)
In the framework of the evolutionary algorithm based on automatic designing of genetic operators (EA^{2}DGO), the genetic operators are not predefined by a designer before problem solving but are searched and designed in the process of problem solving. Thus, the framework of EA^{2}DGO consists of two core components. One is the unit of problem solving (e.g., function optimization), which operates in the problem solution space; its objective is to find global optimal solutions. The other is the unit of automatically designing genetic operators, which explores the space of genetic operators; its objective is to find the optimal genetic operators according to the requirements of the problem (see Figures 4(a), 4(b), and 4(c)). The unit of function optimization and the unit of automatically designing genetic operators are not isolated; they work together in a closely related way. In the unit of function optimization, the genetic operators for mutation are selected from the unit of automatically designing genetic operators in the process of problem solving. And in the unit of automatically designing genetic operators, individuals are selected from the function optimization population for evaluating the performance of the genetic operators.
The general framework of EA^{2}DGO is given in Algorithm 1.

The unit of function optimization focuses on finding the global optimal solution, and its framework is given in Algorithm 2. According to the framework, we can see that the unit of function optimization is very similar to standard differential evolution, including population initialization, crossover manipulation, individual fitness assessment, and individual selection. For the mutation operator, an individual (a genetic operator chromosome) is selected from the population of the unit of automatically designing genetic operators according to the Roulette Wheel Selection algorithm, and the selected individual is used as the mutation operator in the unit of function optimization.

In the EA^{2}DGO, the mutation operator is a function model. The functionality of a function model is to provide a corresponding output (result) given certain input parameters (terminals). The input parameters include x_r1, x_r2, x_r3, x_best, and F. x_r1, x_r2, and x_r3 are three individuals selected randomly from the population of the function optimization unit, and x_best is the best individual in the current population. Supposing all the function symbols in the set F are binary operators, the calculation of the result of the function model is expressed as shown in Algorithm 3.
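The per-dimension evaluation just described can be sketched as follows (an illustrative reconstruction of the function-model idea, not the paper's exact Algorithm 3; the chromosome layout is assumed to pair each function with pointers to higher positions):

```python
import operator

def apply_operator(chromosome, x_r1, x_r2, x_r3, x_best, F):
    """Evaluate a genetic-operator chromosome dimension by dimension to
    produce a mutant vector, assuming all function symbols are binary."""
    mutant = []
    for j in range(len(x_r1)):
        terminals = {"x_r1": x_r1[j], "x_r2": x_r2[j],
                     "x_r3": x_r3[j], "x_best": x_best[j], "F": F}
        n = len(chromosome)
        values = [None] * n
        for pos in range(n - 1, -1, -1):      # right-to-left evaluation
            gene = chromosome[pos]
            if isinstance(gene, str):         # terminal symbol
                values[pos] = terminals[gene]
            else:                             # (function, arg_pos1, arg_pos2)
                fn, a, b = gene
                values[pos] = fn(values[a], values[b])
        mutant.append(values[0])              # expression rooted at gene 0
    return mutant

# A chromosome encoding the DE mutant  x_r1 + F * (x_r2 - x_r3):
de_chrom = [
    (operator.add, 1, 2), "x_r1",
    (operator.mul, 3, 4), "F",
    (operator.sub, 5, 6), "x_r2", "x_r3",
]
```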

In the unit of function optimization, when the selected chromosome (i.e., the genetic operator) helps the function optimization unit generate a better candidate individual, the fitness value of the chromosome is increased by one. In other words, the fitness of a chromosome in the unit of automatically designing genetic operators is measured by the number of times it has made a positive contribution to the unit of function optimization. With this measurement, in the next generation the chromosomes with higher fitness will be selected as genetic operators with higher probability according to the Roulette Wheel Selection method (see Algorithm 4).
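The selection step can be sketched with a standard Roulette Wheel routine over the accumulated credit counts (a sketch; the uniform fallback when no chromosome has earned credit yet is an assumption of ours, since fresh chromosomes start at zero):

```python
import random

def roulette_select(fitnesses, rng=random):
    """Roulette Wheel Selection: index i is chosen with probability
    proportional to fitnesses[i] (the accumulated credit counts)."""
    total = sum(fitnesses)
    if total == 0:
        # assumption: uniform choice before any credit is earned
        return rng.randrange(len(fitnesses))
    pick = rng.uniform(0, total)
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if pick <= acc:
            return i
    return len(fitnesses) - 1
```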

The evolution of chromosomes in the unit of automatically designing genetic operators presents challenges in the general framework of the automation of EA. The proposed method is as follows: for an individual and its offspring in the unit of automatically designing genetic operators, within a certain number of trials (a parameter that can be set by the user), we repeatedly select individuals for mutation manipulation from the population of the unit of function optimization and count, for the parent operator and the offspring operator respectively, the number of times the mutated child is fitter than its parent. A Boolean function better over these two counts is defined as the fitness for the evolution of genetic operators:
If better is true, the candidate offspring is considered better than its parent, and the parent is replaced by the offspring.
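One plausible reading of this evaluation scheme can be sketched as follows. The exact comparison rule is not reproduced above, so the "strictly more wins" criterion, like the helper names, is an assumption:

```python
def evaluate_operator(apply_op, population, fitness, trials, rng):
    """Credit an operator by how often it improves a randomly chosen
    parent over a fixed number of trials (minimization assumed)."""
    wins = 0
    for _ in range(trials):
        i = rng.randrange(len(population))
        child = apply_op(population, i)
        if fitness(child) < fitness(population[i]):  # child is fitter
            wins += 1
    return wins

def better(wins_offspring, wins_parent):
    """Assumed rule: the offspring operator replaces its parent only
    if it scored strictly more wins."""
    return wins_offspring > wins_parent
```

For instance, an operator that always halves the single variable of a quadratic minimization problem wins every trial, so its offspring would need a perfect score plus one to displace it under this rule.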
The offspring is generated by MEP genetic manipulations [8]. While traditional MEP has both crossover and mutation operators, only the mutation operator is used in EA^{2}DGO, for simplicity and to save computing time. Each symbol (terminal, function, or function pointer) in the chromosome may be a target of the mutation operator. When a symbol is changed, a new offspring is generated.
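The MEP-style point mutation described above can be sketched as follows (a sketch: the gene layout, mutation rate, and terminal/function sets are illustrative, and the full head/tail constraint is simplified to keeping the last gene a terminal so argument pointers stay in range):

```python
import random
import operator

FUNCTIONS = [operator.add, operator.sub, operator.mul]
TERMINALS = ["x_r1", "x_r2", "x_r3", "x_best", "F"]

def mutate(chromosome, rate=0.1, rng=random):
    """Point mutation: each gene may be replaced by a new terminal or,
    at non-final positions, by a function whose argument pointers
    index strictly higher positions."""
    n = len(chromosome)
    child = []
    for pos, gene in enumerate(chromosome):
        if rng.random() >= rate:
            child.append(gene)                    # gene kept unchanged
            continue
        if pos == n - 1 or rng.random() < 0.5:    # last gene must be terminal
            child.append(rng.choice(TERMINALS))
        else:                                     # function, args point higher
            a = rng.randrange(pos + 1, n)
            b = rng.randrange(pos + 1, n)
            child.append((rng.choice(FUNCTIONS), a, b))
    return child
```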
Although the evolution method mentioned above can assess the genetic operators before and after MEP genetic manipulation, an unavoidable disadvantage of this method is that it costs more computing resources than traditional optimization algorithms, because the genetic operators need to be evolved in the process of problem solving. The cost in time is influenced by two parameters, times and OMR. Setting proper parameter values can reduce this cost and will be researched in future work.
It is worthwhile to point out that even though there is a mutation operator in both the unit of automatically designing genetic operators and the unit of function optimization, their operations are very different. In the unit of automatically designing genetic operators, the operand/chromosome of the mutation operator is literally an expression tree. In contrast, the operand/chromosome of the mutation operator in the unit of function optimization is a vector of real values used as variables for the function optimization.
The general framework of the unit of automatically designing genetic operators is given in Algorithm 5.

5. Experimental Verification
Single-objective optimization problems are adopted to verify the validity of the EA^{2}DGO algorithm. That is, the genetic operators represented by the above scheme are used to manipulate the individuals of the population in the problem space, and the goal is to find the global optimal solutions of the problems. The functions listed in Table 4 are well-known benchmark functions which have been frequently used in the literature [17]; all of them are minimization problems. To verify the effectiveness and efficiency of the EA^{2}DGO algorithm, we carried out experiments on the 23 benchmark functions listed in Table 4.
