#### Abstract

Many real-world optimization problems are dynamic in nature. These problems change over time in terms of the objective function, decision variables, constraints, and so forth. Therefore, it is very important to study the performance of a metaheuristic algorithm in dynamic environments to assess its robustness in dealing with real-world problems. In addition, it is important to adapt existing metaheuristic algorithms to perform well in dynamic environments. This paper investigates a recently proposed version of the Bees Algorithm, called the Patch-Levy-based Bees Algorithm (PLBA), on dynamic problems and adapts it to deal with such problems. The performance of the PLBA is compared with that of other BA versions and other state-of-the-art algorithms on a set of dynamic multimodal benchmark problems of different degrees of difficulty. The experimental results show that PLBA achieves better results than the other BA variants. The results also indicate that PLBA significantly outperforms some of the other state-of-the-art algorithms and is competitive with the others.

#### 1. Introduction

Many population-based metaheuristic algorithms have been used to solve stationary optimization problems, where the fitness landscape is fixed during the course of optimization. However, most real-world optimization problems face uncertainties that may arise from sources such as dynamic changes in the attributes or the goal of the optimization problem [1]. For instance, in the Travelling Salesman Problem (TSP), if traffic jams occur on some roads, the travel time between the cities connected by those roads is no longer fixed and increases [2]. The job scheduling problem can also face uncertainties such as changes in due dates and orders, the arrival of new jobs, and failures of some machines [3, 4]. Thus, proposing metaheuristic algorithms that can deal with such problems is very important. Additionally, existing metaheuristic algorithms should be adapted to deal with such dynamic optimization problems (DOPs).

Therefore, in recent years, there has been growing interest in dynamic optimization among researchers in the optimization community. As a result, many metaheuristic algorithms in the literature have been applied or modified to handle DOPs. One of the first metaheuristic algorithms used to explore DOPs was the genetic algorithm (GA) [4]. Ursem [5] proposed a multinational GA to deal with DOPs by evolving the algorithm parameters during the search process. Grefenstette [6] adopted a self-adaptive GA for dynamic environments; the proposed algorithm can select different mutation or crossover operators based on the agent idea to control the selection process. Yang [7] proposed a memory-based GA to solve DOPs. Simões and Costa [8] investigated a GA based on an immune system for DOPs. In addition, differential evolution (DE) has been applied to solve DOPs, as can be found in Mendes and Mohais [9], Brest et al. [10], and Mukherjee et al. [11]. Mendes and Mohais [9] introduced a multipopulation DE for DOPs. Brest et al. [10] proposed a self-adaptive multipopulation DE integrated with an aging mechanism for DOPs. Mukherjee et al. [11] presented a variant of DE that modified the genetic operators of DE to suit dynamic optimization. The proposed algorithm utilized locality-induced mutation and crossover operators that acted as a retention strategy, employing previously stored information to adaptively deal with DOPs.

Swarm intelligence-based metaheuristics have also been investigated in dynamic environments. Eberhart and Shi [12] investigated particle swarm optimization (PSO) in tracking a single spatially changing peak. Hu and Eberhart [13] presented an adaptive PSO to automatically track a wide variety of changes in a dynamic system. Parrott and Li [14] investigated PSO with multiple parallel subpopulations to track multiple peaks simultaneously. Blackwell and Branke [15, 16] proposed improvements to PSO for DOPs by building interacting multiswarms. Wang et al. [17] introduced a memory scheme into PSO for dynamic environments that is triggered whenever the exploration of the population discovers a peak. Recently, Kordestani et al. [18] presented an oscillating triangular inertia weight in PSO, which is a time-varying inertia weight parameter, and investigated its performance in tracking optima in dynamic environments. The Ant Colony Optimization (ACO) algorithm has also been studied and adapted to work in dynamic environments. Eyckelhof and Snoek [2] applied a new ant system technique to a dynamic TSP. Tfaili et al. [19] used a multiagent ant colony algorithm, called Dynamic Hybrid Continuous Interacting Ant Colony (DHCIAC track), which hybridizes the ant colony and dynamic simplex methods, to optimize a set of dynamic test functions.

The Artificial Bee Colony (ABC) has also been applied to solve the DOPs. Raziuddin et al. [20] proposed a variant of ABC for DOPs based on a differential update strategy and an external archive or memory. In this variant, the good solutions through the generations are retained in a memory and a number of these solutions are randomly selected for the differential update strategy. In this update strategy, the weighted difference between the bee and its neighbor is added to the elite bee. Jiang [21] proposed a modified version of ABC for solving DOPs. The proposed ABC divides the population into two types: the sensitive bees and the optimizing bees. The sensitive bees work as monitors to detect the environmental changes and the optimizing bees act as respondents to changes by searching the changing optimal solution. Kojima et al. [22] proposed an improved version over Basic ABC for solving DOPs with some modifications to the procedures of the Basic ABC. Nakano et al. [23] have further modified the improved ABC in [22] for DOPs by incorporating a detection scheme and a memory scheme of the best solutions. Nseef et al. [24] presented an adaptive multipopulation ABC for dynamic optimization. The number of subpopulations in the proposed ABC changes over time based on the environmental changes strength to adapt to these changes.

In addition, an immune-based algorithm that is called Artificial Immune Network for Optimization (opt-aiNet) [25] was proposed for static optimization and then extended to deal with DOPs [1, 26]. This extended dynamic version is called Artificial Immune Network for Dynamic Optimization (dopt-aiNet). Recently, Turky and Abdullah [27] proposed a multipopulation harmony search with an external archive for dynamic optimization. Other swarm intelligence-based algorithms have been also adapted to solve DOPs such as the Cuckoo Search (CS) [28] and the artificial fish swarm algorithm (AFSA) [29]. A comprehensive survey on swarm intelligence-based algorithms for dynamic optimization can be found in [30].

Over the years, various dynamic optimization problems have been proposed to investigate population-based metaheuristic approaches in dynamic environments. Among these problems are the Moving Peaks benchmark by Branke [31], which was employed in many works such as the studies by Blackwell and Branke [15, 16], Turky and Abdullah [27], and Kordestani et al. [18], and the DF1 generator developed by Morrison and De Jong [32], which was used for testing metaheuristics in dynamic environments in works such as that by Tfaili et al. [19]. However, there was no unified method for constructing dynamic optimization problems across the real, combinatorial, and binary spaces. Thus, recently, a Generalized Dynamic Benchmark Generator (GDBG) has been proposed for all three spaces [33]. Using this generator, six dynamic benchmark problems were generated in the real solution space [34, 35]. These benchmarks fall under two benchmark instances: the Dynamic Composition Benchmark Generator (DCBG) and the Dynamic Rotation Peak Benchmark Generator (DRPBG) [34, 35].

Then, many researchers were motivated to employ these new benchmark problems to study the performance of various metaheuristic algorithms in the dynamic environments. De França and Von Zuben [1] used these challenging benchmarks to test the performance of an adapted variant of the dopt-aiNet algorithm [26] in changing environments. Yu and Suganthan [36] adopted an evolutionary programming (EP) version based on a set of explicit memories to deal with these challenging dynamic benchmark problems. Korošec and Šilc [37] applied the differential ant-stigmergy algorithm (DASA) on the newly proposed dynamic benchmark problems. Li and Yang [38] proposed a variant of PSO (CPSO) that employed a hierarchical clustering method and a fast local search to address the dynamic environments constructed by the benchmarks. Mukherjee et al. [11] presented a modified variant of DE (MDE-LiGO) that utilized locality-induced genetic operators and tested it on the same benchmarks. Good surveys regarding the problems and metaheuristic algorithms in the dynamic environments were conducted by Mori and Kita [4], Blackwell et al. [3], Branke [39], Cruz et al. [40], Nguyen et al. [41], and Mavrovouniotis et al. [30].

A recently developed swarm intelligence algorithm is the Bees Algorithm (BA), a bee swarm-based algorithm proposed by Pham et al. [42] and inspired by the foraging behavior of a swarm of honeybees. A few works have investigated the performance of BA on DOPs [43, 44]; these problems were optimization benchmarks in chemical engineering. Recently, a modified version of the Bees Algorithm, called the Patch-Levy-based Bees Algorithm (PLBA), has been proposed by Hussein et al. [45, 46]. The PLBA has been adopted to solve challenging static real-parameter optimization problems [45]. The experimental results showed that the PLBA outperformed other BA variants and some of the other state-of-the-art algorithms on a set of challenging static real-parameter optimization problems, while being competitive with the remaining state-of-the-art algorithms. The PLBA has also been proposed for multilevel image thresholding [46]. The experiments indicated that the PLBA significantly outperformed Basic BA and other state-of-the-art algorithms. Encouraged by these promising results of the PLBA on these types of static problems, we validate the performance of PLBA and other variants of BA on the set of recently proposed dynamic multimodal benchmarks [34, 35] mentioned above.

The dynamic version of an optimization algorithm should be able to dynamically adapt to environmental changes [24]. A dynamic optimization problem requires an optimization algorithm that is able not only to find the global optimum but also to detect environmental changes and track the changing global optimal solution [36, 38]. An optimization algorithm should have some of the following features to deal with dynamic environments [3, 30, 39]: diversity creation, diversity maintenance, memory of old solutions, and a multipopulation structure. The memory of old solutions can advantageously act as the initial population for a new change in the environment, where a new change is considered the arrival of a new problem [36]. The memory can be beneficial especially when the new optimal solution is not far from its previous positions [36]. The multipopulation strategy has been applied to track many peaks in the search space [24, 36, 38]. Memory and multipopulation schemes are widely proposed and integrated with optimization algorithms to enhance the diversity of the populations. Thus, it can be stated that the most important task in dealing with dynamic optimization problems is maintaining the diversity of the solutions [36, 38] to ensure that the population is not stuck at a single optimum where it cannot make further progress [36].

Several strategies have been adopted to maintain diversity in dynamic optimization problems, such as hypermutation in GAs [11], the random immigrant scheme [11], prediction schemes and the memory-based methods that are considered a special case of them [24], self-adaptive strategies [24], and multipopulation strategies [11, 24]. The hypermutation strategy maintains diversity by increasing the mutation rate for some generations after a dynamic environmental change [11]. The random immigrant strategy maintains diversity by replacing a part of the population with randomly generated solutions in each generation [11]. Hypermutation and random immigrants have been used in GAs and evolutionary algorithms. Prediction schemes enable the algorithm to learn patterns from the previous search history and predict upcoming changes; examples of prediction-based techniques can be found in [47–49]. Many memory-based methods can be found in the literature, such as Branke [31], Eggermont and Lenaerts [50], Branke [51], Yu and Suganthan [36], Yang [52], Daneshyari and Yen [53], and Wang et al. [17]. Self-adaptive schemes maintain diversity by adaptively improving the search behavior of the algorithm, thus reducing the need for manual tuning of the algorithm parameters [24]; examples of methods based on this scheme are available in [5, 6, 18]. Multipopulation strategies have been widely applied to maintain population diversity in dynamic optimization problems. Methods based on the multipopulation strategy divide the population into subpopulations and distribute them over the search space to track multiple peaks [11, 24]. Many multipopulation methods can be found in the literature, such as Branke et al. [54], Blackwell and Branke [15], Li and Yang [38], Li et al. [55], Turky and Abdullah [27], and Nseef et al. [24]. Li et al. [56] conducted a comprehensive experimental analysis of the performance of multipopulation methods and investigated the difficulties related to the multipopulation strategy. One of the challenging issues in applying the multipopulation strategy to DOPs is identifying the suitable number of subpopulations [30]. Thus, researchers have been motivated to propose adaptive multipopulation algorithms to deal with DOPs.
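As an illustration, the random immigrant scheme described above can be sketched in a few lines. This is a minimal sketch for a minimization problem, not code from any of the surveyed papers; the `immigrant_rate` and `bounds` values are illustrative assumptions.

```python
import random

def random_immigrants(population, fitness_fn, immigrant_rate=0.2,
                      bounds=(-5.0, 5.0)):
    """Replace the worst fraction of the population with random solutions.

    Sketch of the random-immigrant diversity scheme (minimization assumed);
    immigrant_rate and bounds are illustrative assumptions.
    """
    dim = len(population[0])
    population.sort(key=fitness_fn)                 # best individuals first
    n_immigrants = int(immigrant_rate * len(population))
    lo, hi = bounds
    # Overwrite the tail (worst individuals) with fresh random solutions.
    for i in range(len(population) - n_immigrants, len(population)):
        population[i] = [random.uniform(lo, hi) for _ in range(dim)]
    return population
```

Calling this once per generation keeps a fixed fraction of the population exploring, at the cost of discarding the worst individuals.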

Blackwell [57] proposed a self-adapting multiswarm optimizer based on a simple rule for generating and removing subswarms, which helps in the dynamic change of the number of subswarms. This optimizer was one of the first adaptive methods concerning the number of populations [30]. Li et al. [55] proposed an adaptive multiswarm optimizer (AMSO), which employs a single-linkage hierarchical clustering method to generate the proper number of subpopulations. In the proposed optimizer, the diversity of the population is maintained based on the differences of the number of subpopulations between two successive diversity increasing points. Li et al. [56] proposed an adaptive multipopulation framework for solving the DOPs, in which the number of populations is adjusted based on a database storing historical information of the changes in the algorithm behavior. Ali et al. [58] proposed an adaptive multipopulation version of DE, in which each population has its own life cycle and size. The life cycle and the size of each population in each generation are controlled by a success-based scheme, which adaptively changes them based on the previous success of the population. Yazdani et al. [29] proposed a multiswarm AFSA, where the swarms are categorized into parent and child swarms. The parent swarms are employed to find uncovered peaks and the child swarms are responsible for covering and tracking the located peaks. Whenever a parent swarm converges to a new peak, it generates a child swarm to cover that peak and track it. This parent-child mechanism was proposed to address the challenging issue of the unknown number of peaks.

The diversity in BA-based algorithms is created and maintained by keeping a large portion of the population scouting for new promising solutions. In the PLBA, features such as the patch concept and Levy flights help the algorithm track more than one peak in the sense of maximization problems. The patch environment in the initialization part helps in spreading the solutions out over the search space. The Levy flights in the initialization and global search parts, with a suitable search size, help PLBA maintain diversity because of the rare long jumps of these flights. At the same time, the greedy local search based on Levy flights with a suitably small search size reduces the length of the long Levy steps, which work together with the frequent short steps to exploit new regions and thus find the new optimum.

The key objectives of this paper are as follows:

(1) To validate the performance of the recently proposed PLBA and other variants of BA on the set of recently proposed dynamic multimodal benchmarks [34, 35] mentioned above and to compare among them.

(2) To apply the PLBA to challenging DOPs and compare it with other state-of-the-art algorithms.

(3) To show the advantage of modelling additional natural aspects in nature-inspired metaheuristics such as BA, namely, the patch concept and Levy motion.

The remainder of this paper is organized as follows. Section 2 provides a brief description of the Basic BA, Shrinking-based BA, Patch-Levy-based Bees Algorithm (PLBA), and other state-of-the-art algorithms. Section 3 describes the adaptation of the PLBA algorithm and other BA versions to deal with dynamic environments. Section 4 presents the results of performance evaluations and experiments obtained for the PLBA and compares them with those obtained using other BA variants and other state-of-the-art algorithms. Finally, Section 5 concludes this paper.

#### 2. Brief Description of the Compared Algorithms

##### 2.1. Basic BA

The Bees Algorithm (BA) is a bee-based optimization algorithm inspired by the foraging behavior of a swarm of honeybees. Basic BA performs a kind of exploitative local search combined with an exploratory global search [59]. Both search modes implement a uniform random search. In the global search, the scout bees are uniformly distributed at random to different areas of the search space to scout for potential solutions. In the local search, follower bees are recruited to exploit patches that scout bees have found to be more promising. Two processes are required to conduct the local search, namely, the selection and recruitment processes. In the selection process, the patches found to be more promising are chosen, whereas in the recruitment operation, follower bees are recruited for the promising patches, while more bees are recruited for the best patches out of those selected.
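The two search modes described above can be sketched as follows for a minimization problem. This is a hedged illustration of the general BA scheme, not the authors' implementation; all parameter values (`n` scouts, `m` selected sites, `e` elite sites, `nep`/`nsp` recruits, `ngh` neighbourhood size) are assumptions chosen for the example.

```python
import random

def basic_bees_algorithm(f, dim, bounds, n=20, m=5, e=2, nep=10, nsp=5,
                         ngh=0.1, iterations=100, seed=0):
    """Minimal sketch of the Basic Bees Algorithm (minimization).

    n scouts, m selected sites, e elite sites, nep/nsp recruited bees,
    ngh neighbourhood radius (fraction of the range); all values here
    are illustrative assumptions, not the settings used in the paper.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    rand_point = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    scouts = [rand_point() for _ in range(n)]
    for _ in range(iterations):
        scouts.sort(key=f)                        # best sites first
        new_population = []
        for i in range(m):                        # local (exploitative) search
            recruits = nep if i < e else nsp      # more bees for elite sites
            site = scouts[i]
            neighbours = [[min(hi, max(lo, x + rng.uniform(-ngh, ngh) * (hi - lo)))
                           for x in site] for _ in range(recruits)]
            new_population.append(min(neighbours + [site], key=f))
        # Remaining n - m bees scout the space uniformly (global search).
        new_population += [rand_point() for _ in range(n - m)]
        scouts = new_population
    return min(scouts, key=f)
```

The greedy `min(neighbours + [site], key=f)` step implements the waggle-dance-like recruitment: each selected patch keeps only the best bee found in its neighbourhood.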

##### 2.2. Shrinking-Based BA

Shrinking-based BA is an improved version of Basic BA that adds a neighborhood shrinking step [59]. In this paper, the shrinking procedure is applied to all patches simultaneously, using a global memory, at each iteration of the BA after the recruitment stage [59].
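The shrinking rule can be sketched as follows: the neighbourhood size is kept while local search keeps improving and contracted once the search stalls. The shrink factor here is an illustrative assumption, not the value used in [59].

```python
def shrink_neighbourhood(ngh, improved, shrink_factor=0.8):
    """Keep the patch size while local search improves; shrink it otherwise.

    Sketch of neighbourhood shrinking; shrink_factor is an assumption.
    """
    return ngh if improved else ngh * shrink_factor
```

For example, `shrink_neighbourhood(0.5, improved=False)` contracts the patch radius to 0.4, progressively focusing the local search around a stalled site.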

##### 2.3. Patch-Levy-Based Bees Algorithm (PLBA)

PLBA is an enhanced variant of BA, in which the population initialization, local search, and global search are performed according to the patch concept and Levy motion [45, 46]. In the initialization part of PLBA, the search space that represents the environment is divided into clear segments, which represent patches, such that the food sources are clearly distributed in patches. The centres of the patches are used to represent areas inside the hive. Then, the bees are distributed randomly from these hive areas according to the Levy flight distribution, which is believed to approximate the natural flight patterns of bees. In the global search, the scout bees are distributed from the hive areas, which are chosen to be the same areas of the hive from which they are initially distributed. Then, the bees start to scout according to the Levy flight as in the initial step. The local search inside a patch is performed based on Levy looping flights. The patch and Levy concepts in the PLBA algorithm are thus modelled in the initialization, local, and global parts. The pattern in Levy flights can be described by many relatively short steps (corresponding to the detection range of the searcher) that are separated by occasional longer jumps. Thus, generating step sizes according to a Levy distribution can be advantageous, since the frequent short steps help more population members exploit the most promising regions, while at the same time the rarely occurring long jumps keep a portion of the members exploring the distant regions of the solution space [46]. This can help in accelerating the convergence to the optimal solutions.
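The heavy-tailed step lengths described above can be generated, for example, with Mantegna's algorithm, a standard way of drawing approximately Levy-distributed steps in metaheuristics. This is a generic sketch under the assumption of a stability index `beta = 1.5`; the PLBA papers [45, 46] should be consulted for the exact generator used.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Levy-distributed step length via Mantegna's algorithm.

    A common generator for approximately Levy-stable steps in
    metaheuristics; beta (the stability index) is an assumption here,
    and the PLBA papers may use a different scheme.
    """
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.gauss(0, sigma_u)        # heavy-tailed numerator
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

Most draws are small, while occasional draws are orders of magnitude larger, giving exactly the mix of frequent short steps and rare long jumps described above.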

##### 2.4. Other State-of-the-Art Algorithms

As stated in the introduction (Section 1), the optimization algorithms that were tested on the recently proposed dynamic benchmarks [34, 35] include dopt-aiNet [1], PSO [1], rPSO [38], rGA [38], Memory-based EP [36], CPSO [38], DASA [10, 37], and MDE-LiGO [11]. Therefore, these algorithms were employed in the comparisons of this paper as the other state-of-the-art algorithms.

The dopt-aiNet is the artificial immune network algorithm designed for dynamic optimization. The dopt-aiNet maintained diversity by detecting redundant solutions, removing the worst ones, and inserting newly generated solutions in their place. To rapidly locate the nearest local optimum, the Gaussian mutation in opt-aiNet was modified so that the step size is automatically calculated using a golden section procedure. The dopt-aiNet employed a small and constantly changing population.

The PSO is the standard PSO without restart. The rPSO (PSO with restart) is the standard PSO, in which the population is reinitialized when an environmental change is detected. The rGA is the standard genetic algorithm which reinitializes the population when a dynamic change is detected.

The Memory-based EP is the evolutionary programming algorithm with an ensemble of memories to address dynamic optimization. The Memory-based EP proposed a dynamic strategy parameter to make the algorithm more suitable for dynamic optimization. The algorithm enhanced the diversity of the population by using an ensemble of external memories as the initial population for a new environmental change. When the memories cannot help and the population loses its diversity, the population is reinitialized except for the best solution.

The CPSO is a variant of PSO that employs a hierarchical clustering technique and a fast local search procedure to deal with dynamic optimization problems. CPSO modified the learning strategy of standard PSO in the global search to cover as many promising local optima as possible and to speed up the local search. CPSO then used an adaptive multipopulation strategy based on a single-linkage hierarchical clustering technique to generate a suitable number of subswarms and track multiple peaks. The hierarchical clustering can be considered better than the traditional multipopulation methods in the sense that it helps in producing the proper number of subpopulations [38].

The DASA is the differential ant-stigmergy algorithm. It is an ACO-based algorithm integrated with a differential graph representation. The DASA used a pheromone mechanism as a means of communication between ants. The pheromone mechanism is an automatic and dynamic means with negative and positive feedbacks imitating the short-term and long-term memories [60].

The MDE-LiGO is a modified variant of DE that utilized locality-induced genetic operators. The MDE-LiGO maintained diversity by extending the traditional mutation and crossover operators of DE to locality-based operators that retain the local traits of the best parents in the offspring, thus helping to preserve the diversity of the population. The locality-induced operations guide the newly generated solutions to cover promising local regions throughout the search space. The MDE-LiGO then performed the Fuzzy C-Means (FCM) clustering method to divide the population into distinct local regions. The best solutions from each cluster are retained and the others are reassigned (i.e., reinitialized) in the search space for a new environmental change to cover new potential regions. FCM can be considered better than hard clustering because it is especially suitable for the overlapped datasets that are common in optimization problems with many basins of attraction [11].

#### 3. The PLBA and Other BA Variants for Dynamic Optimization

In the PLBA and the other BA-based algorithms used in the comparisons, to deal with the dynamic changes, the solutions of the last iteration before the detection of a change are reevaluated and used as a good starting point for the next change. In addition, the parameters related to the shrinking procedure, *ngh* in Shrinking-based BA and in the PLBA, are reinitialized each time a change is detected. In the case of a dimensional change, in addition to the reevaluation of the last solutions and the reinitialization of the shrinking parameters, the solutions are resized. Therefore, it can be easily observed that the PLBA and the other BA-based algorithms are used without any modification except for the reevaluation of the solutions when a change is detected and the reinitialization of the shrinking parameters.
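The change-handling steps described above (reevaluation, shrinking-parameter reset, and resizing on dimensional change) can be sketched as follows. Function and parameter names are hypothetical, and padding new dimensions with random components is an assumption for illustration.

```python
import random

def on_change_detected(population, evaluate, ngh_initial, new_dim=None,
                       bounds=(-5.0, 5.0), rng=random):
    """Sketch of the change handling used for the BA variants here:
    re-evaluate retained solutions, reset the shrinking parameter, and
    resize solutions if the dimension changed. Names are hypothetical,
    and padding new dimensions with random components is an assumption.
    """
    lo, hi = bounds
    if new_dim is not None:                          # dimensional change
        for i, sol in enumerate(population):
            if new_dim < len(sol):
                population[i] = sol[:new_dim]        # truncate extra dimensions
            else:
                population[i] = sol + [rng.uniform(lo, hi)
                                       for _ in range(new_dim - len(sol))]
    fitness = [evaluate(sol) for sol in population]  # re-evaluate on new landscape
    ngh = ngh_initial                                # reset shrinking parameter
    return population, fitness, ngh
```

The retained (re-evaluated) solutions then serve as the starting population for the new environment, exactly as described for the BA variants above.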

#### 4. Experimental Setup and Results

##### 4.1. Experimental Setup

###### 4.1.1. Benchmark Functions

The performance of the PLBA is evaluated in dynamic environments using 6 dynamic multimodal benchmark problems. These problems were generated by the GDBG system that was proposed by Li and Yang [33] and provided for the CEC'2009 competition on evolutionary computation in dynamic environments. According to Li et al. [34], the control parameters of the GDBG system can undergo 7 change types: T1 (small step change), T2 (large step change), T3 (random change), T4 (chaotic change), T5 (recurrent change), T6 (recurrent change with noise), and T7 (random change with dimensional change). Detailed information regarding the framework of these changes can be found in the description given by Li et al. [34, 35].

A summary of the employed benchmarks is given in Table 1. The first benchmark problem, F1, is the dynamic rotation peak function with 10 and 50 peaks; thus, two instances of F1 are used. The other five test problems (F2–F6) are dynamic composition benchmark functions. Each problem instance undergoes the seven change types (T1–T7); thus, a total of 49 test cases are examined. The most important parameters of the test problems were set as follows: dimension D = 10 in the case of unchanged dimension and D ∈ [5, 15] in the case of dimensional change, the search space x ∈ [−5, 5]^D, the change frequency (*frequency* or Max_FES/change) = 10,000 × D fitness evaluations, and the number of changes per run (num_change/run) = 60. The remaining parameters can be found in the description given by Li et al. [34].

###### 4.1.2. Performance Evaluation

The performance of PLBA in the dynamic environments is evaluated on the 49 test cases that result from testing the seven change types with each of the seven problem instances, where the first problem has two instances, as can be seen in Table 1. For each test case, the average best (Avg_best), the average mean (Avg_mean), and the standard deviation (STD) of the absolute error in the function value over 20 runs are calculated as follows:

$$\mathrm{Avg\_best} = \frac{1}{runs}\sum_{i=1}^{runs}\min_{j} E^{last}_{i,j}(t),$$

$$\mathrm{Avg\_mean} = \frac{1}{runs \times num\_change}\sum_{i=1}^{runs}\sum_{j=1}^{num\_change} E^{last}_{i,j}(t),$$

$$\mathrm{STD} = \sqrt{\frac{\sum_{i=1}^{runs}\sum_{j=1}^{num\_change}\left(E^{last}_{i,j}(t)-\mathrm{Avg\_mean}\right)^{2}}{runs \times num\_change - 1}},$$

where $E^{last}(t)=\left|f(x_{best}(t))-f(x^{*}(t))\right|$ is the absolute error in the function value for each change after reaching the maximum number of evaluations per change (Max_FES/change = 10,000 × D). $x_{best}(t)$ and $x^{*}(t)$ are the best solution found and the global optimum, respectively, at time $t$. Additionally, $runs$ is the total number of runs of the considered algorithm.
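Given a runs × num_change matrix of end-of-change errors, the three statistics described above can be computed directly. A minimal sketch, assuming the errors have already been recorded:

```python
def dynamic_error_metrics(errors):
    """Compute Avg_best, Avg_mean, and STD from a runs x num_change matrix
    of absolute errors recorded at the end of each change, following the
    CEC'2009 GDBG definitions used in this section.
    """
    runs = len(errors)
    num_change = len(errors[0])
    total = runs * num_change
    avg_best = sum(min(run) for run in errors) / runs       # best per run, averaged
    avg_mean = sum(sum(run) for run in errors) / total      # mean over all changes
    variance = sum((e - avg_mean) ** 2
                   for run in errors for e in run) / (total - 1)
    return avg_best, avg_mean, variance ** 0.5
```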

The average best and average mean of the absolute error values are used to ascertain the solution quality obtained by the PLBA on the dynamic problems. The average best gives an idea of how close the algorithm got from the global optimum during the whole environmental change type [1], whereas the average mean provides an indication of how close the algorithm was from the global optimum during the entire optimization process [1].

To evaluate the performance of the PLBA in terms of both solution quality and convergence speed, the overall performance of PLBA is measured according to Li et al. [34] as follows:

$$\mathrm{performance} = \sum_{k=1}^{49} mark_{k},$$

where $mark_{k}$ is the marking measurement of the performance of the algorithm on the $k$th test case among the 49 test cases. The maximum mark values ($mark_{max}$), as concluded from the description given by Li et al. [34], are 1.5, 2.4, 1, and 1.6 for (T1–T6) in F1, (T1–T6) in (F2–F6), T7 in F1, and T7 in (F2–F6), respectively. The corresponding percentages are 0.015, 0.024, 0.01, and 0.016, respectively. Based on these percentages, $mark_{k}$ is defined as

$$mark_{k} = \frac{mark_{max}}{runs \times num\_change}\sum_{i=1}^{runs}\sum_{j=1}^{num\_change} r_{ij},$$

where $r_{ij}$ is defined as follows:

$$r_{ij} = \frac{r_{ij}^{last}}{1 + \sum_{s=1}^{S}\left(1 - r_{ij}^{s}\right)/S},$$

where $r_{ij}^{last}$ is the relative value of the best solution to the global optimum for each change after reaching Max_FES, with $r_{ij}^{last}=f(x_{best})/f(x^{*})$ for maximization problems and $r_{ij}^{last}=f(x^{*})/f(x_{best})$ for minimization problems, whereas $r_{ij}^{s}$ is the relative value of the best solution to the global optimum at the $s$th sampling during one change, and $S = (\mathrm{Max\_FES/change})/s_{f}$, where $s_{f}$ is the sampling frequency and $s_{f} = 100$ according to Li et al. [34].
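The marking measurement above can be sketched for a single test case as follows, where `r_last[i][j]` is the end-of-change relative value and `r_samples[i][j]` holds the S sampled relative values during change j of run i. This is an illustrative implementation of the formula as reconstructed here, not the official scoring code.

```python
def mark_for_test_case(mark_max, r_last, r_samples):
    """mark_k = mark_max / (runs * num_change) * sum_ij r_ij, with
    r_ij = r_last_ij / (1 + sum_s (1 - r_ij(s)) / S).
    """
    runs, num_change = len(r_last), len(r_last[0])
    total = 0.0
    for i in range(runs):
        for j in range(num_change):
            samples = r_samples[i][j]
            # Penalize slow convergence via the sampled relative values.
            penalty = sum(1.0 - r for r in samples) / len(samples)
            total += r_last[i][j] / (1.0 + penalty)
    return mark_max * total / (runs * num_change)
```

Perfect tracking (all relative values equal to 1) yields the full mark, and summing the 49 per-case marks gives the overall performance score.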

Additionally, the median performance of the relative value $r(t)$ of the best solution $x_{best}(t)$ to the global optimum $x^{*}(t)$ is depicted on convergence graphs for each problem over the total runs, with termination after Total_FES evaluations over the num_change changes.

###### 4.1.3. Parameter Settings

The performance of PLBA is compared with that of Basic BA and Shrinking-based BA in the dynamic environments. In addition, a further set of experiments is conducted to compare the performance of PLBA with other current state-of-the-art population-based algorithms. The parameter settings of these algorithms can be found in their references.

In order to perform a fair comparison among the BAs, the three versions of BA are executed with the same settings for the common parameters: the number of scout bees (*n*), the number of selected sites (*m*), the number of elite sites (*e*), the number of bees recruited for each of the elite sites (*nep*), and the number of bees recruited for every site of the remaining (*m* − *e*) sites (*nsp*). In addition, the parameters relevant to each version are set to different values for different problems, as shown in Table 2.

##### 4.2. Experimental Results

Because no results regarding the mark scores of MDE-LiGO were reported by Mukherjee et al. [11], the MDE-LiGO was not included in the comparisons of the overall performance (see Table 3) or in the comparisons among the algorithms in terms of both solution quality and convergence speed (see Tables 15 and 17), which are indicated by the mark scores. The MDE-LiGO was employed in the comparisons in terms of solution quality only (see Tables 14 and 16), because its average means, which give an indication of the quality of the solutions, are available in Mukherjee et al. [11]. Table 3 presents a summary of the overall performance of the PLBA, Shrinking-based BA, Basic BA, dopt-aiNet, PSO, rPSO, rGA, Memory-based EP, CPSO, and DASA. In addition, the overall performance and mark scores of these algorithms on each test case are given in Tables 22–30. Tables 4–9 show the average best, average mean, and standard deviation of the error values achieved for each test case by PLBA, Shrinking-based BA, and Basic BA over 20 runs. Figures 1–3 present the convergence graphs of PLBA, Shrinking-based BA, and Basic BA, respectively, on the tested problems. In these figures, the relative value $r(t)$ of the best solution to the global optimum is presented for each change type (T1–T7).

*(Figures 1–3: convergence graphs of PLBA, Shrinking-based BA, and Basic BA, respectively; each figure comprises panels (a)–(g).)*

###### 4.2.1. Analysis of the Results of the PLBA

From the overall performance and mark scores in Table 22 and from Figure 1, it can be seen that the PLBA algorithm achieved the best performance on F1 (the rotation peak problem), followed by F5 (the composition Ackley problem), over all change types. On the other hand, the PLBA algorithm had the worst performance on F3 (the composition Rastrigin problem), on which all the compared algorithms performed worst and could not track the optimum. This may be attributed to the difficulties posed by the surface of this function in static environments [61], which become even more severe in dynamic environments. Nevertheless, PLBA performed better on this function than algorithms such as dopt-aiNet, CPSO, rPSO, Shrinking-based BA, and Basic BA.

For the other problems, the performance of the PLBA is fairly good and can be considered competitive with the other optimization techniques. Furthermore, Table 22 and Figure 1 show that, although the dimensional change is the most challenging scenario, the PLBA performed fairly well on this change type in all benchmark problems and better than algorithms such as Basic BA, dopt-aiNet, and rPSO, as can be seen in Tables 24–26.

Investigating the results in Tables 4 and 5 and in Figure 1 shows that the PLBA performed well on the rotation peak function and the composition Ackley function over all change types, confirming that PLBA had the best performance on these two functions. For the composition sphere function and the hybrid composition problem, the PLBA behaved similarly and obtained good results for the small change, the chaotic change, the recurrent change with noise, and the random change with changed dimension. On the other hand, PLBA struggled to find the optimum for the other change types, especially the recurrent change, for which the average mean was very high compared to the other types.

Concerning the composition Griewank function, although PLBA was able to produce good results for some change types, it still struggled to find the optimum for others, especially the large step change, the random change, and the recurrent change, for which the average means were very high compared to the other change types. Regarding the most difficult function, the composition Rastrigin function, although PLBA was able to reach the global optimum or stay close to it in its best cases (Avg_best results) in almost all change types, on average it performed badly in all change types except the small change, for which its average mean was very low, as expected, because of the simplicity of this change type.

An interesting finding is that, although problems are naturally solved smoothly in the case of the small step change, it is not as natural for an algorithm to handle the chaotic or dimensional changes. Nevertheless, PLBA was able to handle these change types as well. It can also be observed that PLBA performed fairly well on the hybrid composition problem, despite the difficulty of hybrid composition functions, as can be seen in the Avg_best results in all change instances and in the Avg_mean results in most change types.

###### 4.2.2. Comparisons among the BAs

Comparing the overall performance of all BA-based algorithms in Table 3 and Figures 1–3 clearly shows that the Basic BA is much worse than the PLBA and the Shrinking-based BA, whereas the PLBA achieved the best performance. However, although the overall performance of PLBA is higher than that of Shrinking-based BA, the difference is not significant and the results are generally comparable, as can be seen in Figures 1 and 2 and as will be observed in the statistical analysis in Tables 12 and 13. This may be because the dynamic change of the step size caused by the shrinking procedure suits the dynamic environment and helps track the optimum. In addition, the number of dimensions, which is not very large, may contribute to the comparable performance of Shrinking-based BA with PLBA. Examining the results of the test cases in Tables 4 and 6, Tables 5 and 7, Tables 22 and 23, and Figures 1(d) and 2(d) shows that PLBA achieved much better performance than Shrinking-based BA in all change types on the composition Rastrigin function, which is the most difficult problem. As mentioned before, this function poses difficulties in static environments, and it is expected to pose even more in dynamic environments.
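The shrinking mechanism credited above for Shrinking-based BA's competitiveness can be illustrated with a minimal sketch: the local-search patch contracts whenever a site fails to improve, so the step size adapts over time. The decay factor and minimum size below are illustrative choices, not the settings used in the paper.

```python
# Minimal sketch of a shrinking neighbourhood (illustrative parameters only).
def shrink(patch_size, improved, decay=0.8, min_size=1e-6):
    """Contract the patch when the site stagnates; keep it when it improves."""
    return patch_size if improved else max(min_size, patch_size * decay)

# Example run: two stagnant generations, one improvement, one more stagnation.
size = 1.0
for improved in [False, False, True, False]:
    size = shrink(size, improved)
# size is now 1.0 * 0.8 * 0.8 * 0.8 = 0.512
```

Because the patch only ever contracts, a dynamic environment additionally requires reinitializing the patch size after a change, which matches the reinitialization of the shrinking parameter mentioned later in the paper.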

The average means and mark scores of the test cases were analyzed statistically to test the significance of the results in terms of solution quality only and in terms of both solution quality and convergence speed, respectively. To statistically analyze the results produced by PLBA and the other BA versions, the Friedman test was used; it is the best-known statistical method for testing performance differences among more than two algorithms [62]. Tables 10 and 11 tabulate the ranks achieved by this test for the three BA-based algorithms using the average mean and mark score results, respectively. These tables clearly show that PLBA ranked first, followed by Shrinking-based BA, both in terms of solution quality only and in terms of both solution quality and convergence speed. The p values calculated using the Friedman test (approximately 0) showed a highly significant difference among the BA-based algorithms.
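The ranking procedure behind Tables 10 and 11 can be sketched as follows: each test case ranks the algorithms (rank 1 = lowest error), and the Friedman chi-square statistic is computed from the average ranks. The error values below are placeholders for illustration, not the paper's data.

```python
# Sketch of the Friedman ranking; the data are illustrative, not the paper's.
def friedman_statistic(results):
    """results[i][j] = error of algorithm j on test case i (lower is better).
    Returns (average rank per algorithm, Friedman chi-square statistic)."""
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:
        # rank 1 = best (lowest error); ties are ignored here for simplicity
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    avg_ranks = [s / n for s in rank_sums]
    chi2 = 12 * n / (k * (k + 1)) * sum(r * r for r in avg_ranks) - 3 * n * (k + 1)
    return avg_ranks, chi2

# Columns: PLBA, Shrinking-based BA, Basic BA (illustrative errors only).
errors = [
    [0.10, 0.12, 0.50],
    [0.05, 0.07, 0.40],
    [0.20, 0.18, 0.90],
    [0.01, 0.02, 0.30],
    [0.15, 0.20, 0.80],
    [0.08, 0.09, 0.60],
]
avg_ranks, chi2 = friedman_statistic(errors)
# avg_ranks ~ [1.17, 1.83, 3.00]: the first algorithm gets the best rank
```

The statistic is compared against a chi-square distribution with k − 1 degrees of freedom to obtain the p value; a near-zero p value, as reported in the paper, indicates a significant difference among the algorithms.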

Subsequently, the Holm and Hochberg post hoc tests were used to detect the concrete differences among the BA-based algorithms. Tables 12 and 13 depict the adjusted p values obtained by the post hoc tests with PLBA as the control method, using the average mean and mark score results, respectively. As these tables show, the Holm and Hochberg tests strongly suggested a highly significant difference in performance between the PLBA and the Basic BA, both in terms of solution quality only and in terms of both solution quality and convergence speed. On the other hand, no significant difference was found between PLBA and Shrinking-based BA. This confirms the conclusion drawn earlier from the overall performance.
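The Holm step-down adjustment that produces such adjusted p values can be sketched briefly: raw p values are sorted ascending, each is multiplied by the number of remaining hypotheses, and monotonicity is enforced. The raw p values below are illustrative, not those from Tables 12 and 13.

```python
# Sketch of the Holm step-down adjustment (illustrative p values only).
def holm_adjust(p_values):
    """Return Holm-adjusted p values, in the original input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):               # rank 0 = smallest raw p
        candidate = (m - rank) * p_values[i]       # scale by remaining tests
        running_max = max(running_max, candidate)  # enforce monotonicity
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Control = PLBA; comparisons vs. Shrinking-based BA and Basic BA.
adjusted = holm_adjust([0.40, 0.0001])
# -> [0.40, 0.0002]: only the second comparison stays significant
```

The Hochberg step-up procedure works through the same sorted list from the largest p value downward and is slightly less conservative; both lead to the same qualitative conclusion described above.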

###### 4.2.3. Comparisons with Other State-of-the-Art Algorithms

From the overall performance in Table 3, it is clear that the standard PSO without any modification had the worst performance and the DASA achieved the best performance. It can also be observed that the PLBA performed better than rPSO, rGA, and dopt-aiNet and worse than CPSO, Memory-based EP, and DASA in the dynamic environment. Interestingly, the PLBA achieved better performance than dopt-aiNet, which was designed especially to deal with DOPs and includes all the desirable features for such problems, such as diversity generation, diversity maintenance, memory usage, and multipopulation capability.

It is also worth noting that, although CPSO outperformed PLBA, PLBA performed better than CPSO on the composition Rastrigin function. PLBA also performed better than dopt-aiNet on the composition Ackley problem and on the hybrid composition function in all change types (see Tables 22, 25, and 29).

An interesting finding is that the Basic BA performed much better than the standard PSO, with total mark scores of 13.15165 and 0.0014, respectively. Basic BA may benefit from its better diversity, achieved by keeping a large portion of the population scouting for new promising solutions. Another interesting finding is that PLBA achieved good results even though it lacks features usually required in dynamic environments, such as memory usage and multipopulation capabilities. However, as mentioned in Section 1, features such as the patch concept and Levy flights help guide the search toward different promising areas of the search space. The patch environment in the initialization part helps spread the solutions out over the search space. The Levy flights with a suitable search size in the initialization and global search parts help PLBA maintain diversity because of the rare long jumps. At the same time, the greedy local search based on Levy flight with a suitably small search size shortens the long Levy steps, which then work together with the frequent short steps to exploit new regions and thus find the new optimum.
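The Levy-flight moves discussed above can be sketched with Mantegna's algorithm, a common way to draw Levy-distributed steps: most steps are short, with occasional long jumps that preserve diversity. The β value, step size, and bounds below are illustrative, not the PLBA's actual settings.

```python
# Sketch of Levy-flight moves via Mantegna's algorithm (illustrative
# parameters; not the PLBA's actual configuration).
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-distributed step: frequent short moves, rare long jumps."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_move(position, step_size, lower, upper, rng=random):
    """Move a candidate solution by a scaled Levy step, clamped to bounds.
    A small step_size plays the role of the local search; a larger one,
    the global search."""
    return [min(upper, max(lower, x + step_size * levy_step(rng=rng)))
            for x in position]

random.seed(1)
pos = levy_move([0.0] * 5, step_size=0.1, lower=-5.0, upper=5.0)
```

Varying `step_size` between the global and local search phases reproduces the trade-off described above: long jumps explore new regions while short steps exploit the current one.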

The Friedman test was also used to test the global differences in the results obtained by PLBA and the other state-of-the-art algorithms, in terms of solution quality only and in terms of both solution quality and convergence speed. Tables 14 and 15 depict the ranks calculated by the Friedman test using the average mean and mark score results, respectively. As mentioned in Section 4.2, the results of MDE-LiGO were not included in the comparisons of the mark scores, which indicate both solution quality and convergence speed, but were considered in the comparisons of the average means, which indicate solution quality only, because no mark scores were reported for MDE-LiGO. With MDE-LiGO included, MDE-LiGO was clearly the best performing algorithm and CPSO the second best when considering only solution quality. Considering both solution quality and convergence speed, DASA was the best algorithm and CPSO the second best, given that MDE-LiGO was excluded for the reason explained above. The different ranks of CPSO and DASA in the two cases show that CPSO achieved better results than DASA in the dynamic environments but DASA was faster than CPSO. In both cases, rPSO was the worst performing algorithm. The PLBA ranked fifth in the first case and fourth in the second. The p values calculated from the Friedman statistic (approximately 0) strongly suggested a significant difference among the algorithms considered.

Subsequently, the Holm and Hochberg methods were used to detect the concrete differences between the best performing algorithm and the rest. Table 16 shows the adjusted p values obtained with MDE-LiGO (the best algorithm considering only solution quality) as the control method, whereas Table 17 presents the adjusted p values with DASA (the best algorithm considering both solution quality and convergence speed) as the control method. Table 16 clearly shows that the Holm and Hochberg tests strongly suggested a significant difference between MDE-LiGO and each of the other algorithms in terms of solution quality at the considered significance level. Considering both solution quality and convergence speed, the Holm and Hochberg tests strongly suggested a significant difference between DASA and each of the remaining algorithms. It should be noted that the worst performing algorithm was actually PSO (Basic PSO), but because no detailed results were reported for it by De França and Von Zuben [1], it was not included in the statistical comparisons.

Two additional sets of statistical comparisons were conducted between the PLBA and the algorithms it outperformed, using the average mean and mark score results. The aim was to check whether there is a significant difference among them in general, and between PLBA and each of these algorithms in particular. Regarding the ranks, Tables 18 and 19 clearly show that PLBA ranked first, consistent with the previous rankings, where PLBA ranked ahead of these algorithms. The p values computed from the ranks calculated by the Friedman test (approximately 0) suggested a highly significant difference among the algorithms. Considering only solution quality, the Holm and Hochberg tests showed a highly significant improvement of PLBA over rPSO and over rGA at the considered significance levels, whereas no significant difference was found between dopt-aiNet and PLBA, as can be seen in Table 20. On the other hand, considering both solution quality and convergence speed, PLBA showed a significant improvement over all the other considered algorithms, including dopt-aiNet, as can be seen in Table 21.

In general, the superiority of an algorithm over others can be accounted for by its better diversity and its ability to adapt to environmental changes, thanks to the strategies and features it employs to address DOPs. For example, MDE-LiGO was the best performing algorithm, and rGA, rPSO, and the standard PSO were the worst performing ones in terms of solution quality. This can be explained by MDE-LiGO integrating features that have advantages over those of the other algorithms, such as FCM clustering, which is considered better than the hard clustering in CPSO, as pointed out in Section 2.4. It also retains the traits of the best individuals in the offspring, thus helping to increase the diversity of the population. On the other hand, PSO may suffer diversity loss because all particles are attracted to the global best particle, causing premature convergence on local optima [38]. As for rGA and rPSO, they are the simple, standard versions of GA and PSO, respectively, with only a reinitialization mechanism to maintain diversity when an environmental change is detected.

In addition, it can be observed that PLBA achieved better results than rGA, rPSO, and the standard PSO. This can be accounted for by the patch concept, the Levy motion, and the shrinking parameters, in addition to the replacement of the worst solutions (i.e., the random immigrant scheme); all of these features help maintain diversity, as explained earlier in this section and in Section 1. Keeping the last solutions of the previous dynamic change as the initial solutions for the next change may also contribute to the good performance of PLBA over rGA, rPSO, and PSO. The PLBA also performed better than dopt-aiNet, although the latter was designed especially for dynamic environments. The lower performance of this dopt-aiNet, which is a modified variant of the original dopt-aiNet, compared with the PLBA may be due to the modifications made to the original algorithm to reduce its computational complexity. These modifications included the removal of some mutation operators and the limitation of the search for the step size of the Gaussian mutation to a specific range and to a low number of function evaluations. In addition, the memory population was not used, and the number of clones was reduced to one clone per cell. These modifications may reduce the computational cost, but they may also reduce the diversity of the algorithm. The very nature of an algorithm and its procedure can also suit dynamic optimization, as with DASA, whose dynamic and adaptive pheromone mechanism can act as short-term and long-term memories, as described in Section 2.4. Generally, the contribution of a specific strategy to diversity maintenance depends on the way it is implemented and is affected by the other schemes integrated with it.
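The random immigrant scheme and change detection mentioned above can be sketched as follows: a change is detected by re-evaluating a stored solution, and the worst fraction of the population is then replaced with fresh random solutions while the survivors carry over to the new landscape. The function names, the 20% replacement rate, and the sphere objective are illustrative assumptions, not the PLBA's actual details.

```python
# Sketch of change detection plus a random immigrant scheme (illustrative
# names and parameters; not the PLBA's actual implementation).
import random

def detect_change(sentinel, stored_fitness, objective):
    """Re-evaluate a stored solution; a changed fitness signals a new landscape."""
    return objective(sentinel) != stored_fitness

def random_immigrants(population, objective, lower, upper, rate=0.2, rng=random):
    """Replace the worst `rate` fraction with random solutions (minimization)."""
    population.sort(key=objective)  # best solutions first
    n_replace = int(len(population) * rate)
    dim = len(population[0])
    for i in range(len(population) - n_replace, len(population)):
        population[i] = [rng.uniform(lower, upper) for _ in range(dim)]
    return population

random.seed(0)
sphere = lambda x: sum(v * v for v in x)  # stand-in objective
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(10)]

changed = detect_change([1.0, 1.0], 2.0, sphere)  # False: landscape unchanged
pop = random_immigrants(pop, sphere, -5.0, 5.0)
```

Carrying over the sorted survivors corresponds to reusing the last solutions of the previous change, while the immigrants inject the diversity needed to track a moved optimum.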

#### 5. Conclusion

This paper introduced a recently proposed version of the Bees Algorithm (PLBA) for solving DOPs and investigated its performance on a set of dynamic multimodal benchmark problems with environmental changes of different degrees of difficulty. In addition, the performance of PLBA was compared with other BA versions and with other state-of-the-art algorithms from the literature. The results have shown that PLBA could track the optimal solution and deliver reasonable solutions most of the time. The comparisons among the BA-based algorithms showed that PLBA outperformed Shrinking-based BA and Basic BA. However, although the PLBA was better than Shrinking-based BA on the dynamic problems, their results were generally comparable. It can therefore be concluded that a dynamic change of the step size is a desirable characteristic for an algorithm tracking a changed optimal solution. The experiments also indicated that PLBA performed much better than the standard PSO, rPSO, rGA, and dopt-aiNet algorithms on dynamic optimization and was competitive with the others on some results.

It is worth noting that, apart from the reevaluation of the last solutions of the previous change and the reinitialization of the shrinking parameter, no modifications were needed for PLBA to deal with dynamic environments and achieve promising results. The patch environment, the Levy flights, and keeping a large portion of the population scouting proved to be good capabilities for dealing with the tested DOPs. Including other features that are desirable for dynamic environments may further enhance the performance of PLBA on dynamic optimization.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

The authors would like to thank the Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, for providing facilities and financial support under Fundamental Research Grant Schemes nos. GP-K007984, PRGS/1/2016/ICT02/UKM/02/1 entitled “Intelligent Vehicle Identity Recognition for Surveillance,” DIP-2015-023 entitled “Object Descriptor via Optimized Unsupervised Learning Approaches,” and FRGS/1/2016/ICT02/UKM/02/10 entitled “Commute-Time Convolution Kernels for Graph Clustering to Represent Images.”