Research Article  Open Access
Muhammad Sulaiman, Abdellah Salhi, Asfandyar Khan, Shakoor Muhammad, Wali Khan, "On the Theoretical Analysis of the Plant Propagation Algorithms", Mathematical Problems in Engineering, vol. 2018, Article ID 6357935, 8 pages, 2018. https://doi.org/10.1155/2018/6357935
On the Theoretical Analysis of the Plant Propagation Algorithms
Abstract
Plant Propagation Algorithms (PPA) are powerful and flexible solvers for optimisation problems. They are nature-inspired heuristics which can be applied to any optimisation/search problem. There is a growing body of research on PPA in the literature, most of it experimental. Little, however, has been done on the theoretical front. Given the prominence this algorithm is gaining in terms of performance on benchmark problems as well as practical ones, some theoretical insight into its convergence is needed. The current paper aims to fill this gap by providing a sketch of a global convergence analysis.
1. Introduction
The theoretical analysis of stochastic algorithms for global optimisation is not new and can be found in a number of sources such as [1–5]. The majority of the algorithms considered use random search in one way or another to find the optimum solution [6–13]. Here, we consider the algorithmic scheme of the Plant Propagation Algorithm for continuous optimisation, or PPAC [14], and theoretically investigate its global convergence to the optimum solution. The optimisation problems of concern are continuous and defined on finite-dimensional domains.
The basic version of PPA [15] models the propagation of strawberry plants. The scheme uses short runners for exploitation or local search refinement, while long runners are used for diversification and exploration of the search space. Since the propagation of strawberries is due to seeds as well as runners, a Seed-based Plant Propagation Algorithm (SbPPA) has also been introduced in [16]. Both PPAC and SbPPA have been shown to be efficient on continuous unconstrained and constrained optimisation problems; statistical convergence analyses of PPAC and SbPPA can be found in [9–11, 14–16].
PPAC [14, 15] consists of two steps:
(1) Initialization: a population of parent plants is generated randomly.
(2) Propagation: a new population is created from persistent parents (strawberry plants) and their children (new strawberry plants at the end of runners, i.e., a distance away from parent plants).
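To make the scheme concrete, the following is a minimal, illustrative sketch of a PPA-style loop in Python. It is not the authors' implementation: the fitness normalisation, runner counts, and step sizes follow the general strawberry-plant scheme of [15], and all function and parameter names here are our own assumptions.

```python
import random

def ppa_sketch(f, bounds, pop_size=10, generations=50, max_runners=5):
    """Illustrative PPA-style loop: fitter plants send more, shorter runners.

    f: objective to minimise; bounds: list of (lo, hi) pairs, one per dimension.
    """
    # (1) Initialization: random population of parent plants.
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        fmin, fmax = f(pop[0]), f(pop[-1])
        new_pop = list(pop)  # persistent parents survive into the pool
        # (2) Propagation: each parent sends runners.
        for x in pop:
            # Normalised fitness in [0, 1]: 1 = best plant, 0 = worst.
            fit = 1.0 if fmax == fmin else (fmax - f(x)) / (fmax - fmin)
            n_runners = max(1, int(max_runners * fit * random.random()))
            for _ in range(n_runners):
                # Short runners for fit plants, long runners for unfit ones;
                # each coordinate is perturbed and clamped to the bounds.
                y = [min(max(xj + 2.0 * (1.0 - fit) * (random.random() - 0.5)
                             * (hi - lo), lo), hi)
                     for xj, (lo, hi) in zip(x, bounds)]
                new_pop.append(y)
        # Selection: sort parents + runners, keep the best pop_size plants.
        new_pop.sort(key=f)
        pop = new_pop[:pop_size]
    return pop[0]

# Usage on the 2-D sphere function over [-5, 5]^2.
best = ppa_sketch(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2)
```

Because the best plant persists and selection is elitist, the best objective value in this sketch never worsens from one generation to the next, which mirrors the selection model analysed later in the paper.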
Let S denote the search space such that S ⊂ ℝ^n, where n is its dimension. By an iteration of PPAC we mean a new generation of child plants produced by the parent plants. These child plants are the result of either short or long runners [14, 16]. This is the basic setup that we consider to sketch a proof of convergence to the global optimum of a given continuous optimisation problem.
The paper is organised as follows. Section 2 presents the terminology used in the analysis of PPAC. Section 3 analyses a population of plants. Section 3.1 describes the convergence analysis of PPAC. Section 4 is the conclusion.
2. Terminology and Notation
We consider single-objective minimization problems [17]: find x* ∈ S such that f(x*) ≤ f(x) for all x ∈ S, where the objective function is defined as f : S → ℝ with S ⊂ ℝ^n, and x* denotes the best spot for a plant in the search space. Here x is an n-dimensional position vector.
The population at the g-th iteration is denoted by P_g, where NP is the population size. The coordinates of runners, or more precisely their endpoints, are denoted by y ∈ ℝ^n, where n is the space dimension of the given problem.
2.1. Search Equations and Evaluation of New Plants
Variants of PPA can be found in [14–16]. In this paper we analyse PPAC as Algorithm 1 of [14].
In order to send a short or long runner, a candidate solution is generated [14, 19–21] as in (1a), (1b), and (1c), where NP is the population size, t is the Monte Carlo trial run counter, p is the modification probability, and r is a randomly generated number for each j-th entry, j = 1, …, n. The indices i1, i2, and i3 are mutually exclusive; that is, i1 ≠ i2 ≠ i3 ≠ i. Another version of PPA, called SbPPA [16], which is inspired by propagation via seeds, implements the search equation (2) instead, in which the step is drawn from the Lévy distribution [22] and the reference point is a random coordinate within the search space. Equations (1a), (1b), (1c), and (2) perturb the current solution; the results can be seen in Figures 1(a) and 1(b), respectively.
Figure 1: (a) perturbations by (1a), (1b), and (1c); (b) perturbations by (2).
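Since the displayed equations (1a)–(1c) and (2) are not reproduced above, the following Python sketch only illustrates the two families of moves the text describes: a perturbation built from three mutually distinct population members applied coordinate-wise with modification probability p, and a seed-style move scaled by a Lévy-distributed step. Mantegna's algorithm is one common way to draw such a step; the exact coefficients and combination used in [14, 16] may differ, so everything here is an assumption for illustration.

```python
import math
import random

def perturb_runner_style(pop, i, p=0.5):
    """Sketch of a (1a)-(1c)-style move: each coordinate of plant i is
    perturbed with probability p using three mutually distinct donors
    i1 != i2 != i3 != i drawn from the current population."""
    i1, i2, i3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    y = list(pop[i])
    for j in range(len(y)):
        if random.random() < p:           # modification probability p
            r = random.random()           # fresh random number per entry
            y[j] = pop[i1][j] + r * (pop[i2][j] - pop[i3][j])
    return y

def levy_step(beta=1.5):
    """Mantegna's algorithm for an approximately Levy-stable step."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def perturb_seed_style(pop, i, lo=-5.0, hi=5.0):
    """Sketch of a (2)-style move: step away from plant i relative to a
    random coordinate z in the search space, scaled by a Levy draw
    (assumed bounds [lo, hi] per dimension)."""
    z = [random.uniform(lo, hi) for _ in pop[i]]
    return [xj + levy_step() * (xj - zj) for xj, zj in zip(pop[i], z)]

# Usage: perturb the first and second plants of a small random population.
pop = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(6)]
y = perturb_runner_style(pop, 0, p=0.5)
z = perturb_seed_style(pop, 1)
```

The first move keeps the runner close to the current population (exploitation), while the heavy-tailed Lévy step occasionally produces very long jumps (exploration), matching the contrast shown in Figures 1(a) and 1(b).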
2.2. A Case Study
Let C_i be the class of runners sent by the i-th parent plant. Each runner y in class C_i is decomposed into two vectors u and v, where u denotes the vector of the coordinates which are perturbed with respect to the current position of the plant, while v represents the vector of the corresponding unperturbed coordinates with respect to the current position of the plant. This can be represented as y = u + v.
To clarify this idea, consider an example [17] of a runner y newly generated by the i-th plant in which only some of the coordinates are perturbed. We can then write y = u + v, where u carries the perturbed coordinates (with zeros in the remaining entries) and v carries the unperturbed coordinates (with zeros in the perturbed entries).
The dot product of these vectors is zero, which shows that they are mutually orthogonal. Mathematically, this can be written as u · v = 0.
Let U and V denote the two vector spaces spanned by vectors of the form u and v, respectively. U and V are subspaces of ℝ^n. This implies that U ∩ V = {0} and U + V = ℝ^n.
A scalar objective function f defined over ℝ^n can then be evaluated at the decomposed position y = u + v, where (11) represents the objective value corresponding to a new runner at position y. Similarly, different runners are produced corresponding to different classes and evaluated by the same procedure. This procedure can be generalized to n-dimensional problems [1–3].
3. Graphical and Theoretical Analysis of a Population of Plants
Algorithm 1 states that the j-th coordinate of an i-th parent plant is perturbed with probability p and remains unchanged with probability 1 − p. Thus there are 2^n possible runners that can be generated for each i-th parent plant using (1a), (1b), and (1c), where n is the space dimension of the given problem.
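The count of 2^n possible runners per parent follows because each of the n coordinates is independently either perturbed or left unchanged; a quick enumeration illustrates this for a small dimension:

```python
from itertools import product

n = 4  # space dimension (illustrative)
# Each pattern marks which coordinates of a parent are perturbed (1)
# and which are kept unchanged (0).
patterns = list(product([0, 1], repeat=n))
print(len(patterns))  # 2**n = 16 distinct perturbation patterns
```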

Let the random population at any generation g be represented as P_g, where NP denotes the population size. To create a runner by using (1a), (1b), and (1c) for the next generation g + 1, PPA uses only the population of parent plants at generation g; it is not required to know about any other runner in generation g + 1. This shows that all runners created at generation g + 1 are statistically mutually independent. Furthermore, the initial population is random and its parent plants do not depend on each other. Thus, by induction, the runners at all further generations are mutually independent.
From (1a), (1b), and (1c), a runner may be formed from the plant itself or by choosing three different members of the current population. In the case of (1a) [19], runners (long or short) can be sent in as many ways as there are parent plants, as in Figure 1. On the other hand, when (1b) or (1c) is used, three distinct vectors are combined to send a new runner, so in the latter cases the number of possibilities is correspondingly larger. In (1a)–(1c), the candidate vectors are calculated with respect to the current generation g, using an n-dimensional random vector r within the interval [0, 1]. The probability density functions (PDFs) [17, 23] of these new vectors can be written accordingly.
The PDFs of new runners created with (1a)–(1c) are given in (15).
Definition 1 (convolution operation ⋆ [24]). Let f₁ and f₂ be Laplace-transformable piecewise continuous functions defined on [0, ∞). The convolution product of these two functions is again a function of t, defined as (f₁ ⋆ f₂)(t) = ∫₀ᵗ f₁(τ) f₂(t − τ) dτ.
Let X be any plant in generation g + 1, x a plant position in the current generation g, and y the runner produced by the i-th plant at generation g. Then the joint PDF of the parent plant in the next generation, based on the parent x and the runner y, is given by the convolution of their respective PDFs.
Note that a runner is selected for the next population only if its rank is less than or equal to NP, the population size; that is, its objective value is less than the maximum objective value (in the case of a minimization problem) in the current population. Note also that, instead of greedy selection, we sort the population and eliminate those plants whose rank is higher than NP. This selection mechanism is represented in model (17).
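The selection mechanism just described (append the runners to the population, sort by objective value, and truncate to the NP best) can be sketched as follows; the function and variable names are our own, for illustration only:

```python
def select_next_population(parents, runners, f, NP):
    """Rank-based survival: keep the NP best individuals of parents + runners.

    A plant survives only if its rank after sorting by objective value
    (minimisation: smallest f first) is <= NP."""
    combined = parents + runners
    combined.sort(key=f)
    return combined[:NP]

# Usage: two parents and three runners on f(x) = x^2, keeping NP = 2.
pop = select_next_population([[2.0], [0.5]], [[1.0], [3.0], [0.1]],
                             f=lambda x: x[0] ** 2, NP=2)
# -> [[0.1], [0.5]]
```

Since parents are included in the sorted pool, the best solution found so far can never be lost, which is the elitism the convergence argument in Section 3.1 relies on.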
3.1. Convergence Analysis of PPAC
For illustration purposes, we have implemented a combined version of (1a), (1b), (1c), and (2). This version of PPAC, called HPPA-SbPPA, is a hybridisation of PPA and SbPPA [10]. We have plotted the positions of plants in the populations obtained while solving the Branin and Matyas test functions (see Figures 2, 3, 4, and 5). It is obvious from Figures 2 and 4 that (1a), (1b), and (1c) have generated short runners which exploit the search space locally. On the other hand, in Figures 3 and 5, (2) has generated a diverse range of solutions which are spread over the whole search space. This equation helps the algorithm escape from local minima and explore the solution space better, and hence underpins the global search qualities of this algorithm. Mathematically this can be shown as follows.
Let S ⊂ ℝ^n denote the search space containing the solution of a given optimisation problem min f(x), x ∈ S, where f is the objective function. Then the optimal solution set [25] can be represented as S* = {x* ∈ S : f(x*) ≤ f(x) for all x ∈ S}, where x* is the optimum solution. The region of attraction [25] of the solution set S* is defined as the set of points whose objective value is within ε of f(x*), where ε is a small positive real number and x* is the current best solution.
In PPAC, each parent plant produces a bounded number of runners (solutions) per generation. The probability that, at generation g, a subset of the temporary population contains solutions which are not good enough to be retained in the next generation by the selection model (17) is expressed in terms of a small positive real number δ. Obviously, in all previous generations some of the solutions died and some succeeded in surviving into the next generation. This shows that in previous generations we have some solutions which do not belong to the region of attraction of the optimal solution set.
At the end of each generation, the temporary population is appended to the main population. All individuals are then sorted with respect to their objective values, and the individuals whose rank exceeds the population size NP are omitted. Thus the probability that a generation does not contain an optimum is given below.
After sorting the final population at the end of each generation, the probability that the optimum exists in the discarded subpopulation (the population of weak or dead runners/solutions) is less than that of the retained population. This can be represented as inequality (24). Following [25–27], the right-hand side of inequality (24) is zero if the corresponding series diverges. The convergence of PPA follows and can be summarised in the theorem below.
Theorem 2. PPA converges to the global optimum with probability 1 if it is left to run for a sufficiently long time [1–3]; in other words, the probability that the population contains the global optimum tends to 1 as the number of generations tends to infinity.
Remark 3. Every population includes some solutions which are in the region of attraction of the optimal solution set.
Remark 4. The above remark is due to the exploitation characteristic of PPAC.
Remark 5. Remark 3 implies that the current best solution is always improving or changing its position until the optimum is reached.
Remark 6. The best objective value found converges approximately to the optimal value as the generation count g grows.
4. Conclusion
The Plant Propagation Algorithm (PPA) and its variants for continuous optimisation problems are gaining recognition as flexible and powerful solvers. PPA is a heuristic inspired by the way plants, and in particular the strawberry plant, propagate. It is also referred to as the Strawberry Algorithm. While there is a growing body of experimental and computational work that shows its good behaviour and performance against well-established algorithms and heuristics, there is very little, if any, in terms of theoretical investigation. This gap, therefore, needs to be filled. The aim, of course, in analysing the convergence of any algorithm (in this case PPAC) is to give confidence to potential users that the solutions it returns are of good quality. The convergence analysis put forward in this paper relies on the exploitation and exploration characteristics of the algorithm. Since it does not get stuck in local optima and explores the search space thoroughly, it is only a matter of time before the global optimum is discovered. The approach is probabilistic in nature and ascertains that the global optimum will be found with probability 1 provided the algorithm is run for long enough. The argument for this to hold is that at each iteration new and better solutions are generated, which means that, in the limit, the global optimum is reached. Questions still remain concerning what is considered a reasonable amount of time. Bounds on the time it takes to converge are being developed and results will be presented in a follow-up paper.
Appendix
Spring Design Optimisation
The main objective of this problem [28, 29] is to minimize the weight of a tension/compression spring, subject to constraints on minimum deflection, shear stress, and surge frequency, and to limits on the outside diameter and on the design variables. There are three design variables: the wire diameter d, the mean coil diameter D, and the number of active coils N [30]. The mathematical formulation of this problem, where x = (d, D, N), is given in [29, 30]. The simple limits on the design variables are 0.05 ≤ d ≤ 2.0, 0.25 ≤ D ≤ 1.3, and 2.0 ≤ N ≤ 15.0.
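For reference, the standard formulation of this problem as it appears in the literature (e.g., [29, 30]) can be coded directly. The constants below are the ones commonly used in those sources; they are stated here as an assumption, since the displayed equations are not reproduced above:

```python
def spring_weight(x):
    """Objective: weight of the tension/compression spring.

    x = (d, D, N): wire diameter, mean coil diameter, number of active coils.
    Standard formulation, e.g. Belegundu & Arora / Cagnina et al."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Inequality constraints g_i(x) <= 0 of the standard formulation."""
    d, D, N = x
    g1 = 1.0 - (D ** 3 * N) / (71785.0 * d ** 4)            # minimum deflection
    g2 = ((4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)                  # shear stress
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)                    # surge frequency
    g4 = (D + d) / 1.5 - 1.0                                # outside diameter
    return [g1, g2, g3, g4]

# A near-optimal design reported in the literature (approximately feasible):
x = (0.05169, 0.35695, 11.28966)
w = spring_weight(x)          # close to the best-known weight of about 0.01267
g = spring_constraints(x)     # all constraints satisfied up to small tolerance
```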
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work has been partially sponsored by ESRC Grant ES/L011859/1.
References
[1] A. Zhigljavsky and A. Žilinskas, Stochastic Global Optimization, vol. 9, Springer Science & Business Media, 2007.
[2] C. A. Floudas and P. M. Pardalos, Encyclopedia of Optimization, Springer Science & Business Media, 2008.
[3] Y. D. Sergeyev, R. G. Strongin, and D. Lera, Introduction to Global Optimization Exploiting Space-Filling Curves, Springer Science & Business Media, 2013.
[4] N. Brahimi, A. Salhi, and M. Ourbih-Tari, “Convergence of the plant propagation algorithm for continuous global optimisation,” RAIRO - Operations Research, 2018.
[5] J. He and L. Kang, “On the convergence rates of genetic algorithms,” Theoretical Computer Science, vol. 229, no. 1-2, pp. 23–39, 1999.
[6] W. K. Mashwani, A. Salhi, M. A. Jan, R. A. Khanum, and M. Sulaiman, “Enhanced version of multi-algorithm genetically adaptive for multi-objective optimization,” International Journal of Advanced Computer Science and Applications (IJACSA), vol. 12, no. 6, 2015.
[7] W. K. Mashwani, A. Salhi, O. Yeniay, M. A. Jan, and R. A. Khanum, “Hybrid adaptive evolutionary algorithm based on decomposition,” Applied Soft Computing, vol. 57, pp. 363–378, 2017.
[8] W. K. Mashwani, A. Salhi, M. A. Jan, M. Sulaiman, R. A. Khanum, and A. Algarni, “Evolutionary algorithms based on decomposition and indicator functions: state-of-the-art survey,” International Journal of Advanced Computer Science and Applications (IJACSA), vol. 7, no. 2, 2016.
[9] M. Sulaiman, A Nature-inspired Metaheuristic: The Plant Propagation Algorithm, Ph.D. thesis, University of Essex, 2015.
[10] M. Sulaiman and A. Salhi, “A hybridisation of runner-based and seed-based plant propagation algorithms,” Studies in Computational Intelligence, vol. 637, pp. 195–215, 2016.
[11] M. Sulaiman, A. Salhi, E. S. Fraga, W. K. Mashwani, and M. M. Rashidi, “A novel plant propagation algorithm: modifications and implementations,” Science International, vol. 28, no. 1, 2015.
[12] B. I. Selamoglu, A. Salhi, and M. Sulaiman, “Strip algorithms as an efficient way to initialise population-based metaheuristics,” in Recent Developments in Metaheuristics, pp. 319–331, Springer, 2018.
[13] M. Sulaiman, A. Ahmad, A. Khan, and S. Muhammad, “Hybridized symbiotic organism search algorithm for the optimal operation of directional overcurrent relays,” Complexity, vol. 2018, Article ID 4605769, 11 pages, 2018.
[14] M. Sulaiman, A. Salhi, B. I. Selamoglu, and O. B. Kirikchi, “A plant propagation algorithm for constrained engineering optimisation problems,” Mathematical Problems in Engineering, vol. 2014, Article ID 627416, 10 pages, 2014.
[15] A. Salhi and E. Fraga, “Nature-inspired optimisation approaches and the new plant propagation algorithm,” in Proceedings of the International Conference on Numerical Analysis and Optimization (ICeMATH '11), pp. K2-1–K2-8, Yogyakarta, Indonesia, 2011.
[16] M. Sulaiman and A. Salhi, “A seed-based plant propagation algorithm: the feeding station model,” The Scientific World Journal, vol. 2015, Article ID 904364, 16 pages, 2015.
[17] S. Ghosh, S. Das, A. V. Vasilakos, and K. Suresh, “On convergence of differential evolution over a class of continuous functions with unique global optimum,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 1, pp. 107–124, 2012.
[18] M. Sulaiman and A. Salhi, in Proceedings of the 5th International Conference on Metaheuristics and Nature Inspired Computing, Morocco, October 2014, http://meta2014.sciencesconf.org/40158.
[19] R. Hooke and T. A. Jeeves, “Direct search solution of numerical and statistical problems,” Journal of the ACM, vol. 8, pp. 212–229, 1961.
[20] S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-based differential evolution,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 64–79, 2008.
[21] B. Akay and D. Karaboga, “A modified Artificial Bee Colony algorithm for real-parameter optimization,” Information Sciences, vol. 192, pp. 120–142, 2012.
[22] X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2011.
[23] M. Taboga, Lectures on Probability Theory and Mathematical Statistics, CreateSpace Independent Publishing, 2012.
[24] R. N. Bracewell, The Fourier Transform and Its Applications, McGraw-Hill, New York, NY, USA, 1986.
[25] Z. Hu, S. Xiong, Q. Su, and X. Zhang, “Sufficient conditions for global convergence of differential evolution algorithm,” Journal of Applied Mathematics, vol. 2013, Article ID 193196, 2013.
[26] C. Chen, F. Jin, X. Zhu, and G. Ouyan, Mathematical Analysis, Higher Education Press, 2000.
[27] K. Knopp, Theory and Application of Infinite Series, Courier Dover Publications, 2013.
[28] J. Arora, Introduction to Optimum Design, Academic Press, 2004.
[29] A. D. Belegundu and J. S. Arora, “A study of mathematical programming methods for structural optimization. Part I: Theory,” International Journal for Numerical Methods in Engineering, vol. 21, no. 9, pp. 1583–1599, 1985.
[30] L. C. Cagnina, S. C. Esquivel, and C. A. Coello Coello, “Solving engineering optimization problems with the simple constrained particle swarm optimizer,” Informatica (Slovenia), vol. 32, pp. 319–326, 2008.
Copyright
Copyright © 2018 Muhammad Sulaiman et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.