Abstract
Most nature-inspired algorithms simulate the intelligent behaviors of animals and insects that can move spontaneously and independently. The survival wisdom of plants, as another kingdom of life, has been somewhat neglected even though plants have evolved for a longer period of time. This paper presents a new plant-inspired algorithm called the root growth optimizer (RGO). RGO simulates the iterative growth behaviors of plant roots to optimize search in continuous space. In the growing process, main roots and lateral roots, classified by fitness values, implement different strategies. Main roots carry out exploitation tasks by self-similar propagation in relatively nutrient-rich areas, while lateral roots explore other places to seek better chances. An inhibition mechanism modeled on plant hormones is applied to main roots to prevent explosive propagation in local optimal areas. Once the resources in a location are exhausted, roots shrink away from the infertile conditions to preserve their activity. To validate the optimization effect of the algorithm, twelve benchmark functions, including eight classic functions and four CEC2005 test functions, are tested in the experiments. We compared RGO with existing evolutionary algorithms including the artificial bee colony, particle swarm optimizer, and differential evolution algorithms. The experimental results show that RGO outperforms the other algorithms on most benchmark functions.
1. Introduction
In recent years, many heuristic algorithms inspired by the collective intelligent behaviors of insects and animals have been proposed to solve complex optimization problems. For example, the ant colony optimizer (ACO) simulates the foraging behaviors of ants [1]. The particle swarm optimizer (PSO) simulates the swarm behaviors of birds and fish [2, 3]. The bacterial colony optimizer (BCO) [4] and bacterial colony foraging optimizer (BCFO) [5] simulate typical behaviors of bacteria during their lifecycle. The artificial bee colony (ABC) algorithm simulates the foraging behaviors of a swarm of bees [6]. Compared with traditional mathematical methods, these heuristic algorithms have no central control, and the performance of the population is not compromised by individual failures. Therefore, they are more flexible and robust when dealing with complex, multimodal, and dynamic problems.
In the natural world, while most animals develop toward a predetermined body plan, plants demonstrate iterative growth and constantly produce new organs and structures from actively dividing meristems [7] to adapt to differing environments. As another branch of biology, however, plants have attracted little attention in the field of bio-inspired computing [8] even though they have evolved for a longer period of time. Compared with animals, plants cannot move; they can only grow. There is neither a brain nor neurons in a plant's body. As a result, a plant seems insensitive to external information, slow to take action, and far from intelligent. In some biologists' opinions, however, plants can also be regarded as "intelligent organisms" [9, 10]. During the growing process, a plant shows considerable plasticity in its morphology and physiology in response to a variety of environments [11]. For example, its roots can properly cope with the prevailing conditions in soil, such as avoiding obstacles and exploring nutrient-rich patches or water zones through hydrotropism, chemotropism, gravitropism, and so on. The iterative propagation mode makes roots extremely flexible and adaptive in detecting resources and concentrating their efforts on the most profitable areas [12]. Consequently, roots are always able to find the best positions with nature-designed growth strategies. This is a perfect heuristic for designing optimization algorithms. Thus, inspired by the growth behaviors of plant roots, this paper presents a new algorithm named the root growth optimizer.
The remainder of this paper is organized as follows. Section 2 reviews several topics related to root growth. Section 3 models the root growing process. Section 4 presents the root growth optimization algorithm step by step. Experiments and results are given in Section 5. Section 6 discusses some unique characteristics of RGO, and Section 7 outlines the conclusions.
2. Some Topics about Root Growth
2.1. Self-Similar Propagation
The growth and propagation of roots are considered very important for a plant to adapt to soil environments, since the root is the only organ that obtains water and inorganic nutrients below the soil surface [13]. The architecture of a root system is well known to be a major determinant of its function in acquiring soil resources [14, 15]. Because most root systems are self-similar and can be considered approximate fractal objects over a finite range of scales [16] (as shown in Figure 1), fractal geometry has been widely used to assess the architecture and distribution of root systems in soil [17, 18].
As botanists have discovered, there is a close correlation between architecture and propagation strategies. During the growing process, roots can perceive their external physical environment and implement different strategies accordingly. If there are enough resources, a root will produce many lateral roots while elongating forward; otherwise, few lateral roots are produced. Over time, similar propagation occurs at different positions and at varying scales. As a result, the whole root system covers the most profitable area with a self-similar architecture.
2.2. Inhibition of Plant Hormones
The development process and architecture of a root system are also determined by the internal interplay of various plant hormones. Among them, auxin and cytokinin are well known to be the two crucial hormonal signals, and root growth is mainly regulated by their crosstalk [19, 20]. Both can be generated by meristematic cells. As far as we know, auxin, mostly generated in the shoot and transported to root tips, is a key factor for cell elongation [21], while cytokinin, mostly generated in roots, works locally to enhance the rate of cell division [22]. Only with a certain ratio of auxin to cytokinin does a root grow and develop into a regular architecture [20].
Biologists have discovered that there is a mutually inhibitory interaction between auxin and cytokinin: if one of them increases out of balance, the other promotes the signaling of its inhibitors [7]. For instance, when roots grow rapidly in a nutrient-rich area, abundant cytokinin is synthesized in the meristematic cells of newborn roots, but polar transport cannot yet supply a proportional amount of auxin in time. Cytokinin signaling therefore promotes the expression of auxin signaling inhibitors, forming a negative feedback loop until the root growth rate returns to a balanced level. In summary, roots never propagate explosively in a local area even if all environmental conditions meet their needs.
2.3. Shrinkage
The environment around roots is an open and dynamic system varying with time. In a locally optimal area, water and inorganic nutrients may gradually diffuse away because of the gradient effect. Meanwhile, the roots themselves keep consuming resources all the time. Therefore, the soil will become more and more infertile unless external resources are added in time.
When the environment changes, roots must adjust their behaviors to adapt to the new conditions. In fact, most roots are able to respond effectively to varying conditions even when changes occur unexpectedly [23]. Typically, they stop growing in the area and shrink away from the poor conditions, which makes the whole root architecture a robust system with high diversity in searching for water and nutrients.
3. Artificial Root Growth
3.1. Basic Concepts
In the artificial model, an objective function is treated as the growing environment of plant roots, and the initial roots are considered a homogeneous biomass [24]. Each root apex stands for a feasible solution of the problem. All roots try to adjust their growing directions and propagation strategies in order to search for the optimal growing conditions, which in turn feed back to improve root growth further.
In the growing process, all root apices can select growth strategies composed of the following three basic actions.
(1) Each root apex may elongate forward (or sideways) in the search space.
(2) Each root apex may produce new root apices (namely, daughter apices).
(3) Each root apex may stop functioning as above and become an ordinary piece of root mass.
In a word, a root apex may regrow itself, produce new roots, or stop growing for some reason.
According to fitness values, the whole root mass is divided into three groups. The group with the best fitness values is called the main roots; the group with the worst fitness values is called the aging roots; the rest of the root mass is called the lateral roots. Of the three groups, the aging roots stop growing in the next generation, while the main roots and lateral roots implement different growth strategies.
3.2. The Growth Strategy of Main Roots: Monopodial Branching
According to the monopodial branching strategy, a main root first regrows itself to form an axis, and then branching roots appear at lateral positions. As a result, the growth strategy contains the following three operators.
(1) Regrowing. This operator means that a root apex regrows toward a local best position where water and nutrient conditions are better. The operator is formulated as the following expression:

x_i^{t+1} = x_i^t + c · r · (p_l^t - x_i^t),  (1)

where x_i^t is the original position of the ith root apex and x_i^{t+1} is the new position, c is the local learning constant, r is a random number with uniform distribution in [0, 1], and p_l^t is the local best position in the tth generation.
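As an illustration, the regrowing operator can be sketched in a few lines of Python with NumPy; the function and parameter names below are our own and not part of the paper:

```python
import numpy as np

def regrow(x, local_best, c_local=1.0, rng=None):
    """Sketch of the regrowing operator: move a main-root apex
    toward the local best position by a random fraction of the gap."""
    rng = rng or np.random.default_rng()
    r = rng.uniform(0.0, 1.0)  # uniform random number in [0, 1]
    return x + c_local * r * (local_best - x)
```

Because r lies in [0, 1], the new position (for c_local = 1) always lies on the segment between the old position and the local best, so a main root never overshoots its target.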
(2) Branching. This operator means that a root apex produces some new apices around it. The number of newborn root apices is calculated as follows:

N_i = B_min + ((f_i - f_worst) / (f_best - f_worst)) · (B_max - B_min),  (2)

where f_best and f_worst are the best and the worst fitness values in the generation, respectively, f_i is the fitness value of the original root apex, and B_max and B_min are the preset maximal and minimal branching numbers.
The positions of the new root apices surround the original root apex with a Gaussian distribution N(x_i, σ²). The standard deviation σ is calculated as follows:

σ = ((t_max - t) / t_max) · (σ_init - σ_final) + σ_final,  (3)

where t_max and t are the maximal iteration number and the current iteration number, respectively, σ_init is the initial standard deviation depending on the size of the search range, and σ_final is the final standard deviation determined by the expected accuracy standard in the program.
From formula (3) we can see that as the iteration number increases, the standard deviation becomes smaller and smaller. In this way, similar architecture appears at varying scales; as a whole, all root apices form an approximately self-similar architecture.
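Formulas (2) and (3) can be sketched together as follows; this is an illustrative reading of the text (the normalized fitness ratio works for either minimization or maximization, and a linear decay schedule is assumed), with invented helper names:

```python
import numpy as np

def branch_count(f_i, f_best, f_worst, b_max=3, b_min=1):
    """Sketch of formula (2): apices with fitness near the generation's
    best produce more daughters; the ratio normalizes to [0, 1]."""
    ratio = (f_i - f_worst) / (f_best - f_worst + 1e-12)
    return int(round(b_min + ratio * (b_max - b_min)))

def branch_sigma(t, t_max, sigma_init, sigma_final):
    """Sketch of formula (3): the spread of daughter apices shrinks
    as iterations proceed, assuming a linear decay schedule."""
    return ((t_max - t) / t_max) * (sigma_init - sigma_final) + sigma_final

def branch(x, n, sigma, rng=None):
    """Scatter n daughter apices around x with a Gaussian distribution."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=(n, x.size))
```

The shrinking standard deviation is what produces the self-similar pattern: the same Gaussian scattering recurs at ever smaller scales as the search proceeds.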
(3) Inhibition Mechanism of Plant Hormones. If an area is truly nutrient-rich, the newborn root apices are likely to be classified into the main root group in the next generation, and all main roots will elongate and propagate again. Consequently, the number of roots in the area may increase explosively within a few generations, a phenomenon we call "root number explosion."
Root number explosion is absolutely harmful to the adaptability of a root system. From an optimization point of view, it makes the algorithm plunge into local optima and lose essential diversity, since the total number of root apices is rigidly limited. In fact, the phenomenon is rarely seen in natural plant roots because plant hormones play an important role in inhibiting explosive propagation, as described in Section 2.2.
To simulate the inhibition mechanism of plant hormones in the model, we calculate the local standard deviation σ_l of the new apices produced by a main root and then discard some apices according to the result, following a greedy principle. The operator is implemented as follows:

N_a = ⌊a / σ_l⌋,  (4)

where N_a is the number of root apices that should be abandoned and a is a control parameter.
From formula (4) we can see that the smaller the local standard deviation is, the more root apices will be removed in the next generation. On the one hand, the rapid local increase of roots is controlled in this way, so that root number explosion is avoided. On the other hand, essential diversity is kept, preventing the algorithm from converging prematurely.
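The inhibition step can be sketched as follows. The helper names, the fitness orientation, and the exact removal rule (removals grow as the local spread shrinks) are our illustrative reading of the text, not the paper's reference implementation:

```python
import numpy as np

def inhibit(daughters, fitness, a=1.0):
    """Hormone-inhibition sketch: when the daughters of one main root
    cluster too tightly (small local standard deviation), greedily
    remove the worst of them to prevent 'root number explosion'."""
    sigma_local = float(np.mean(np.std(daughters, axis=0)))
    # fewer apices survive when the cluster is tight (small sigma_local)
    n_remove = min(len(daughters) - 1, int(a / (sigma_local + 1e-12)))
    order = np.argsort(fitness)[::-1]  # assumption: larger fitness is better
    keep = order[: len(daughters) - n_remove]
    return daughters[keep], fitness[keep]
```

A tightly packed cluster is pruned down to its fittest members, while a well-spread cluster is left untouched, which is exactly the diversity-preserving behavior described above.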
3.3. The Growth Strategy of Lateral Roots: Sympodial Branching
In the sympodial branching mode, a root apex produces a new branching apex at a lateral position instead of regrowing along the original direction, and the new branching apex grows into an axis, replacing the original one. The new branching apex is located at a random position around the original root along a random direction α. This strategy is formulated as follows:

x_i^{t+1} = x_i^t + r · α,  (5)

where r is a random number with uniform distribution in [0, 1]. The direction α is calculated as follows:

α = β / ‖β‖,  (6)

where β is a random vector.
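A sketch of the sympodial step follows, with a random unit vector standing in for the random angle; the explicit maximal step length l_max is our added assumption, and all names are illustrative:

```python
import numpy as np

def lateral_grow(x, l_max=1.0, rng=None):
    """Sympodial-branching sketch: replace the apex by one new apex
    in a random direction, scaled by a uniform random step."""
    rng = rng or np.random.default_rng()
    beta = rng.normal(size=x.size)       # random vector
    alpha = beta / np.linalg.norm(beta)  # random unit direction ("angle")
    r = rng.uniform(0.0, 1.0)
    return x + l_max * r * alpha
```

Unlike the regrowing operator of the main roots, this step ignores any best position, which is what gives lateral roots their exploratory role.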
3.4. Shrinkage
Both monopodial and sympodial branching roots consume local resources to keep growing. The decrease of local resources may lead to loss of root activity in nondynamic environments. According to the model, if a root axis has been active for a long time and still fails to join the elite group, it will cease to function and be classified into the aging group in the next generation. In other words, the root system will shrink away from an area after its local resources are exhausted.
4. Root Growth Optimization Algorithm
The root growth mechanism provides a wonderful inspiration for designing a new optimization algorithm. It can be seen that the artificial root growth model describes the original behavior of natural plant roots with few assumptions. The corresponding algorithm is given based on the above model.
The pseudocode of RGO is listed in Pseudocode 1.
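Pseudocode 1 can be condensed into the following Python sketch of the main loop (written for minimization). It simplifies the model in several places, for example, every main root uses the generation's best apex as its local best, and the aging group is realized by simple truncation; all parameter names and default values are illustrative:

```python
import numpy as np

def rgo(objective, dim, bounds, pop=50, iters=200,
        main_frac=0.3, b_max=3, b_min=1, c_local=1.0, seed=0):
    """Condensed RGO sketch: sort the root mass by fitness, regrow and
    branch the main roots, sympodially branch the lateral roots, then
    truncate back to `pop` apices (the discarded tail is the aging group)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    roots = rng.uniform(lo, hi, size=(pop, dim))
    sigma_init, sigma_final = 0.1 * (hi - lo), 1e-3
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.apply_along_axis(objective, 1, roots)
        order = np.argsort(fit)              # ascending: best first
        roots, fit = roots[order], fit[order]
        if fit[0] < best_f:
            best_f, best_x = fit[0], roots[0].copy()
        n_main = max(1, int(main_frac * len(roots)))
        sigma = ((iters - t) / iters) * (sigma_init - sigma_final) + sigma_final
        span = fit[-1] - fit[0] + 1e-12
        new_roots = []
        for i in range(n_main):              # main roots: monopodial branching
            x = roots[i] + c_local * rng.uniform() * (roots[0] - roots[i])
            new_roots.append(x)
            n_branch = int(round(b_min + (fit[-1] - fit[i]) / span * (b_max - b_min)))
            for _ in range(n_branch):        # Gaussian daughter apices
                new_roots.append(x + rng.normal(0.0, sigma, dim))
        for i in range(n_main, len(roots)):  # lateral roots: sympodial branching
            beta = rng.normal(size=dim)
            alpha = beta / np.linalg.norm(beta)
            new_roots.append(roots[i] + rng.uniform() * 0.1 * (hi - lo) * alpha)
        roots = np.clip(np.array(new_roots), lo, hi)
        f_new = np.apply_along_axis(objective, 1, roots)
        roots = roots[np.argsort(f_new)[:pop]]   # aging roots stop growing
    return best_x, best_f
```

Because the current best apex regrows onto itself and survives truncation, the best fitness found is monotonically nonincreasing across generations.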

5. Experiments and Results
To test the performance of RGO, the PSO, ABC, and DE algorithms are employed for comparison, as they have been widely used in recent years [25–30]. Following the state of the art, we eventually select CCPSO2 [31] and MDE_pBX [32] to replace basic PSO and DE, since they have been reported to perform much better than the original versions. In the experiments, twelve test functions, including eight classical functions (Table 1) and four CEC2005 [33] test functions (Table 2), are used to test efficiency.
5.1. Experiment Sets and Benchmark Functions
The eight classic benchmark functions are widely adopted by other researchers to test their algorithms [25–30]. Among these functions, Sphere is a unimodal function with separable variables and is easy to solve. Schwefel 1.2, Schwefel 2.22, and Rosenbrock are unimodal functions with nonseparable variables. The Rosenbrock function has a narrow valley sloping gently from local optima to the global optimum and thus can be treated as a multimodal function.
Rastrigin and Schwefel are multimodal functions with separable variables. Ackley and Griewank are multimodal functions with nonseparable variables. They all have a large number of local optima, making it difficult to reach the global optimum.
Four functions are selected from the CEC2005 test bed: shifted Schwefel 1.2 with noise, shifted rotated Rastrigin, shifted rotated Ackley on bounds, and shifted rotated Griewank without bounds. In a shifted function, the global optimum is shifted by a vector o. In a rotated function, a rotated variable z, produced by left-multiplying the original variable x by an orthogonal matrix M, is used to calculate the fitness instead of x. The orthogonal matrix does not change the shape of the function; however, when one dimension of x is changed, all dimensions of z are affected. Thus, the rotated function differs totally from the original function from the viewpoint of searching.
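The shift-and-rotate construction described above can be sketched directly. The function below builds such a variant from any base function; the names and the QR-based generation of a random orthogonal matrix are our own illustrative choices:

```python
import numpy as np

def rastrigin(z):
    """Classic Rastrigin function; global minimum 0 at z = 0."""
    return float(10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z)))

def shifted_rotated(f, o, m):
    """Build a shifted-rotated variant: z = M @ (x - o), fitness = f(z).
    Shifting moves the global optimum to o; the orthogonal matrix M
    couples all dimensions, so changing one component of x changes
    every component of z."""
    return lambda x: f(m @ (x - o))

rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
o = rng.uniform(-2.0, 2.0, size=3)            # random shift vector
g = shifted_rotated(rastrigin, o, q)          # shifted rotated Rastrigin
```

Since M is orthogonal, the landscape is only rigidly transformed; the global optimum of g sits exactly at x = o, but the rotated variables are no longer separable.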
In this paper, all functions use their standard ranges and variable data. The experiments compare the accuracy of the algorithms under a fixed number of function evaluations; the maximum evaluation count is 10,000. Experiments were carried out using Matlab 7.0 on a standard 2.5 GHz desktop computer. All parameters of CCPSO2, ABC, and MDE_pBX are set to their original values. The population size of all four algorithms is 50. For meaningful statistical analysis, each algorithm is run 20 times, and the mean value and standard deviation are taken as the final results. In RGO, the number of root apices in the main root group is thirty percent of the selected root apices in each generation; two further control parameters are set as 3.0 and 1.0, respectively, and the remaining parameters are all set as 1.0. All the benchmark functions are listed in Tables 1 and 2.
5.2. Experiment Results and Analysis
The mean fitness values and standard deviation values obtained by the four algorithms with 2, 10, 30, 50, and 100 dimensions are listed in Tables 3, 4, 5, 6, and 7. The best values obtained on each function are marked in bold.
As can be seen in Table 3, with dimension 2, ABC performs better than the others on three functions, MDE_pBX shows the best performance on two functions, and CCPSO2 shows the best performance on two others. Though RGO obtains the best results on only four functions, it achieves satisfactory accuracy on the rest, and all algorithms reach the best result on one common function. From Table 4, RGO performs much better than ABC, CCPSO2, and MDE_pBX on all but three functions. From Tables 5, 6, and 7, we can see that most of the best results are obtained by RGO; it clearly outperforms the other algorithms in terms of accuracy on high-dimensional functions.
The convergence results for the classical benchmark functions with 30 dimensions are presented in Figure 2, which show that RGO converges much faster than the other algorithms to the best results on most functions, the Schwefel function excepted. From the figures, it can be seen that the curve of RGO often descends with a certain stability while the other algorithms suffer from stagnation in the middle of the search process, which is shown even more clearly on the CEC2005 test functions in Figure 3. Additionally, the convergence characteristics show that RGO usually converges quickly to an acceptable solution within fewer generations, as can be seen typically in panels (a), (b), (c), (d), and (g) of Figure 2; Figure 3 shows the same feature. This is evidence that the heuristic information of root growth in the algorithm works well, so that appropriate search directions are guided by the main roots. Only the shifted rotated Griewank function without bounds (as shown in Figure 3(d)) is an exception. Obviously, the lack of bounds in this function challenges the searching capability of RGO. Therefore, strategies for searching areas at varying scales, for example, large-scale areas, should be reconsidered in the algorithm.
Figure 2 panels: (a) Sphere function; (b) Schwefel 1.2 function; (c) Schwefel 2.22 function; (d) Rosenbrock function; (e) Rastrigin function; (f) Schwefel function; (g) Ackley function; (h) Griewank function.
Figure 3 panels: (a) Shifted Schwefel 1.2 with noise; (b) Shifted rotated Rastrigin; (c) Shifted rotated Ackley on bounds; (d) Shifted rotated Griewank without bounds.
From the comparisons shown in Tables 3–7, we can see that RGO is a very promising algorithm. It demonstrates strong optimizing ability on the test functions. As the dimension of the functions increases, RGO shows a more obvious advantage over the other evolutionary algorithms.
6. Discussions
6.1. Local Learning
In RGO, formula (1) is similar in form to the iteration formula of PSO, which can be expressed as follows:

v_i^{t+1} = w · v_i^t + c_1 r_1 (p_i^t - x_i^t) + c_2 r_2 (g^t - x_i^t),
x_i^{t+1} = x_i^t + v_i^{t+1},  (7)

where c_1 and c_2 are the local learning factor and the social learning factor, respectively, w is the inertia weight, r_1 and r_2 are random numbers with uniform distribution in [0, 1], p_i^t is the personal best position of the ith particle, and g^t is the global best position. Compared with formula (7), in RGO the elongation of a main root apex is determined only by the local best position and the local learning factor, which means that the social learning factor and the global best fitness value have no influence on the behavior of a main root apex. There are two reasons for this. Firstly, as far as natural plants are concerned, there is no biological proof that a root apex can obtain global information that may reside on another apex far away from it. To the best of our knowledge, it can only obtain local information through hydrotropism, chemotropism, gravitropism, and so forth. Secondly, from the perspective of optimization, the actual global optimum does not always lie near the temporary global optimum found so far in multimodal environments. There is not even direct evidence that one can find the global optimum with higher probability by moving toward the temporary global optimum.
In comparison, local learning is necessary because a group of individuals may be pursuing the same actual local optimum in a unimodal area, where moving toward a better solution reliably benefits one's own fitness.
As a result, social learning does not work as effectively as expected in the presence of an elite strategy. Since roots implement a "the fittest propagate" principle in RGO, the temporary global optimum is not worth following for all root apices.
6.2. Self-Similarity
Biologists found early on that root systems exhibit self-similarity and can be considered approximate fractal objects over a finite range of scales [16]. Since then, the architectural characteristics of root systems have continued to draw researchers' attention [18, 34].
In RGO, it can be seen from formulas (2) and (3) that when a main root apex has a good fitness value, it propagates vigorously. According to formula (3), as main roots elongate into the soil, the positions of their newborn daughter root apices follow the same distribution law at different time points, except that the distribution range becomes smaller and smaller. Meanwhile, newborn roots may become main roots in the next generation and propagate in the same way. With this pattern, an approximately fractal architecture with self-similar characteristics is shaped.
As far as optimization is concerned, self-similar propagation is clearly profitable for roots to rapidly exploit resource-rich areas. As a novel search technique, the correlation between self-similar propagation and searching in multimodal continuous space remains an interesting problem that will be investigated in future work.
7. Conclusions
Based on the adaptive growth behaviors of plant roots, the root growth optimization algorithm is presented in this paper. Twelve benchmark functions, containing eight classical functions and four CEC2005 test functions, were used to test its performance, and the results were compared with those of ABC, CCPSO2, and MDE_pBX. The comparisons show that RGO outperforms the other algorithms on most test functions. RGO has also demonstrated faster convergence to acceptable solutions, which helps reduce computing cost, especially time cost; this is particularly meaningful in dynamic environments with limited computing resources. Moreover, RGO is potentially more powerful than the other algorithms on functions with high dimensions.
A further extension of the current RGO may result in even more effective optimizing algorithms for solving complex multimodal problems. Future research efforts will be focused on improvements of the algorithm, theoretical analysis on selfsimilar propagation and optimization, and applications to practical engineering problems.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publishing of this paper.
Acknowledgments
This research is partially supported by the open fund of Key Laboratory of Networked Control System, Chinese Academy of Sciences (Grant no. WLHKZ2014004), the National Natural Science Foundation of China (Grants nos. 61202341, 61202495, 71271140, and 71001072), The Hong Kong Scholars Program 2012 (Grant no. GYZ24), China Postdoctoral Science Foundation (Grants nos. 20100480705 and 2012T50584), and Shenzhen Science and Technology Plan Project (Grant CXZZ20140418182638764).