Abstract

Laplacian Biogeography-Based Optimization (LxBBO) is a BBO variant that substantially improves BBO's performance. However, when solving some complex problems it still suffers from drawbacks such as poor performance, weak operability, and high complexity, so an improved LxBBO (ILxBBO) is proposed. First, a two-global-best guiding operator is created to guide the worst habitat, mainly to enhance the exploitation of LxBBO. Second, a dynamic two-differential perturbing operator is proposed for updating the first two best habitats to improve the global search ability in the early search phase and the local search ability in the late phase. Third, an improved Laplace migration operator is formulated for updating the other habitats to improve the search ability and operability. Finally, measures such as example learning, removal of the mutation operation, and greedy selection are adopted mainly to reduce the computational complexity of LxBBO. Extensive experimental results on the complex functions of the CEC-2013 test set show that ILxBBO performs better than LxBBO and quite a few state-of-the-art algorithms. The results on Quadratic Assignment Problems (QAPs) also show that ILxBBO is more competitive than LxBBO, Improved Particle Swarm Optimization (IPSO), and Improved Firefly Algorithm (IFA).

1. Introduction

Optimization has always been a hot topic for researchers, engineers, and practitioners. Traditional optimization methods mainly rely on empirical analysis or accurate mathematical models, but for many complex problems they cannot find the optimal solution, or cannot find it within a reasonable time. In recent decades, inspired by nature, many Intelligent Optimization Algorithms (IOAs), such as Particle Swarm Optimization (PSO) [1], Shuffled Frog Leaping Algorithm (SFLA) [2], Differential Evolution (DE) [3], Krill Herd (KH) [4], Harmony Search (HS) [5], Cuckoo Search (CS) [6], Grey Wolf Optimizer (GWO) [7], and Biogeography-Based Optimization (BBO) [8], have sprung up, and they are widely used in many areas [6, 9–11].

BBO is a biogeography-based IOA proposed by Simon [8]. The mathematical models of biogeography describe the migration and mutation of species, and two operators are created in BBO according to these models. The migration operator shares information between habitats to improve the quality of poor solutions, and it embodies the exploitation ability of BBO. The mutation operator, which randomly generates new feature values, maintains population diversity and reflects the exploration ability. As BBO has a sound model, a simple search mechanism, and excellent performance [8], it has not only achieved great success on numerical optimization problems [12] but is also used in many applications [13–16]. Compared with some IOAs, BBO is more competitive and has attracted widespread attention [17]. However, BBO has some defects such as easy entrapment in local optima and weak exploration [18].

The Quadratic Assignment Problem (QAP) is a mathematical model for the location of indivisible economic activities and one of the most difficult combinatorial optimization problems [19]. QAP has been studied for many years as an assignment problem that models a variety of real-world problems, such as the backboard wiring problem, campus planning, airport gate assignment, and the traveling salesman problem [20]. The hospital layout problem is a typical QAP that minimizes the total travel distance of patients. By renovating the existing hospital and optimizing the reallocation of departments, it can bring better service to patients, reduce the time each patient spends traveling, and improve the efficiency of hospital service for more patients [20]. IOAs, which are devoted to the search for good-quality solutions, have been applied to solve QAP [19].

Although BBO variants have been proposed to deal with the defects of BBO, in both BBO and its variants the migration operator plays an important role. Garg and Deep [21] proposed a BBO based on the Laplace migration operator (LxBBO), which improves the migration operator to strengthen the search capability of BBO. However, LxBBO still has some drawbacks such as poor performance, weak operability, and high complexity. Therefore, in this paper, an improved LxBBO (ILxBBO) is proposed, and it is used to solve QAP as an alternative approach for locating hospital departments.

The contributions of this paper are described as follows:
(1) A dynamic two-differential perturbing operator is proposed to update the first two best habitats; it mainly enhances exploration in the early search phase, while the local search ability is improved in the late search phase.
(2) A two-global-best guiding operator is presented to update the worst habitat; it mainly enhances exploitation, and the global search ability is also improved in the early search phase.
(3) An improved Laplace operator is formulated for updating the other habitats to accelerate convergence and improve operability.
(4) ILxBBO is used for complex function optimization on CEC-2013 and applied to QAP; extensive experimental results show that ILxBBO performs better than the comparison algorithms.

The graphical abstract of this paper is shown in Figure 1.

The rest of this paper is organized as follows: Section 2 gives the related work. The proposed ILxBBO is elaborated in Section 3. In Section 4, experimental results on the CEC-2013 test set and QAP are reported and analyzed. Section 5 gives conclusions and future work.

2. Related Work

2.1. Biogeography-Based Optimization

BBO mainly uses the migration and mutation models of species in biogeography to solve optimization problems. In BBO, each solution is called a "habitat," with a Habitat Suitability Index (HSI) measuring the quality of the habitat. The factors of a habitat that characterize habitability are called Suitability Index Variables (SIVs). BBO searches for the best solution mainly through its migration and mutation steps.

2.1.1. Migration Operator

In BBO, a good solution tends to have a high HSI, and it is analogous to a habitat with many species, which has a high emigration rate and a low immigration rate, and vice versa. The purpose of the migration operator is to share information between different solutions: good solutions tend to share their features with poor solutions, and poor solutions accept many features from good solutions. Each habitat has its own immigration rate λ and emigration rate μ, which are calculated as follows:

λk = I · (1 − Nk / N), (1)

μk = E · Nk / N, (2)

where I is the maximum immigration rate, E is the maximum emigration rate, Nk is the number of species of the habitat Hk, and N is the maximum number of species. From equations (1) and (2), this migration model is a simple linear one, although migration in nature more often follows complex nonlinear models [22].
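To make the rate model concrete, the following minimal Python sketch computes the linear rates of equations (1) and (2) for a population sorted from best to worst; the convention that the kth-ranked habitat hosts N − k species is an illustrative assumption, not taken from the paper.

```python
def migration_rates(num_habitats, E=1.0, I=1.0):
    """Linear immigration/emigration rates of BBO (equations (1) and (2)).

    Habitats are assumed to be sorted from best to worst, so the habitat at
    index k (0-based) is assumed to host N - k species (illustrative only).
    """
    N = num_habitats
    rates = []
    for k in range(N):
        n_k = N - k                      # assumed species count of habitat k
        lam = I * (1.0 - n_k / N)        # immigration: low for good habitats
        mu = E * n_k / N                 # emigration: high for good habitats
        rates.append((lam, mu))
    return rates

# The best habitat gets (lambda, mu) = (0, 1); the worst gets (1 - 1/N, 1/N).
print(migration_rates(5))
```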

The migration operator modifies a habitat's SIVs by accepting features from other, better habitats, which can be expressed as follows:

Hi(SIV) ← Hk(SIV), (3)

where Hi is the immigration habitat and Hk is the emigration habitat, which is selected by the roulette wheel selection [8].
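As an illustration of how equation (3) and the roulette wheel selection interact, here is a minimal Python sketch; the function names and the population layout (a list of lists) are assumptions made for this example, not the paper's implementation.

```python
import random

def roulette_select(mu_rates, exclude):
    """Pick an emigration habitat index with probability proportional to mu,
    skipping the immigrating habitat itself."""
    candidates = [(j, mu) for j, mu in enumerate(mu_rates) if j != exclude]
    r = random.uniform(0.0, sum(mu for _, mu in candidates))
    acc = 0.0
    for j, mu in candidates:
        acc += mu
        if acc >= r:
            return j
    return candidates[-1][0]   # numerical safety fallback

def migrate(population, lam_rates, mu_rates):
    """BBO migration (equation (3)): each SIV of habitat i is replaced, with
    probability lambda_i, by the corresponding SIV of a roulette-selected
    habitat k."""
    for i, habitat in enumerate(population):
        for d in range(len(habitat)):
            if random.random() < lam_rates[i]:
                k = roulette_select(mu_rates, exclude=i)
                habitat[d] = population[k][d]   # H_i(SIV) <- H_k(SIV)
    return population
```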

2.1.2. Mutation Operator

Sudden events may drastically alter certain characteristics of a habitat, thereby changing its HSI and causing a significant change in the number of species. In BBO, the mutation rate of a habitat is inversely proportional to its species probability. The mutation rate mi is calculated from the probability pi of the species number as follows:

mi = mmax · (1 − pi / pmax), (4)

where mmax is the maximum mutation rate, a user-defined parameter, the computation of pi follows [8], and pmax = max(pi). The mutation is conducted as follows:

Hi(j) = lbj + rand · (ubj − lbj), (5)

where Hi is the mutation habitat, j ∈ [1, D] (D is the number of decision variables), lbj and ubj are the lower and upper boundary values of the jth SIV of Hi, respectively, and rand is a uniformly distributed random real number between 0 and 1.
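The mutation step can be sketched as follows in Python; the species probabilities are assumed to be supplied by the caller (their computation follows [8] and is not reproduced here).

```python
import random

def mutate(population, probabilities, lb, ub, m_max=0.005):
    """BBO mutation (equations (4) and (5)): the mutation rate is
    m_i = m_max * (1 - p_i / p_max), and a mutated SIV is redrawn
    uniformly within its bounds [lb_j, ub_j]."""
    p_max = max(probabilities)
    for i, habitat in enumerate(population):
        m_i = m_max * (1.0 - probabilities[i] / p_max)
        for j in range(len(habitat)):
            if random.random() < m_i:
                habitat[j] = lb[j] + random.random() * (ub[j] - lb[j])
    return population
```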

In order to save the best solutions in the search process, the elitism strategy is used to keep some best solutions. At each iteration, after operations such as migration and mutation are carried out, the population is sorted. The several worst habitats are replaced with some elitists kept before, and the population is sorted again. The steps of BBO are given as follows:
Step 1: set the parameters and initialize the population randomly
Step 2: evaluate each habitat and sort the population from the best to the worst by their HSIs
Step 3: calculate the immigration, emigration, and mutation rates and keep the elitists
Step 4: perform the migration operator by equation (3)
Step 5: perform the mutation operator by equation (5)
Step 6: limit each new solution's boundary
Step 7: calculate each habitat's HSI and sort the population from the best to the worst
Step 8: replace some worst habitats with the elitists
Step 9: sort the population again from the best to the worst
Step 10: decide whether the stopping criterion is met
Step 11: if so, output the best solution; otherwise, return to Step 3

According to these steps, BBO obtains strong local search capability through migration and global search ability through mutation. However, BBO has some drawbacks. For example, the migration operator simply produces new solutions by directly copying some features of good solutions and cannot generate new features from promising new areas of the search space, and the mutation operator can harm the accuracy of the solution in the later search stage.

To remedy BBO's defects, many researchers have made great efforts to improve it. The improvements fall mainly into the following aspects. (1) Topology. Zheng et al. [23] used three different topologies, namely, ring topology, square topology, and random topology, to enhance the exploration ability of BBO. Feng et al. [24] presented a hybrid migration operator with random ring topology to enhance the potential population diversity of BBO. (2) Migration operator. Ma and Simon [25] proposed a blended migration operator, in which a new solution consists of two parts: features of other solutions and its own features. Xiong et al. [18] presented a polyphyletic migration operator to raise the population diversity of BBO and used an orthogonal learning strategy to make a systematic and elaborate search. Li et al. [22] presented a perturbed migration to enhance the exploration ability and integrated Gaussian mutation into BBO. Chen et al. [26] introduced the covariance matrix to reduce the dependence on the coordinate system and enhance BBO's rotational invariance. Zhang et al. [27] presented an Efficient and Merged BBO (EMBBO) to enhance the optimization efficiency. (3) Mutation operator. Gong et al. [28] embedded Gaussian, Cauchy, and Levy mutations into BBO, respectively. Lohokare et al. [29] proposed a mutation operator that combines two individuals to generate a new feasible solution to improve the exploration ability. (4) Hybridizing BBO with other IOAs. Gong et al. [30] combined the exploration of DE with the exploitation of BBO to enhance the performance. Savsani et al. [31] integrated BBO with the Artificial Immune Algorithm (AIA) and Ant Colony Optimization (ACO), respectively, and proposed four mixed BBOs. Khademi et al. [32] combined Invasive Weed Optimization (IWO) with BBO to enhance the performance of BBO. Zhang et al. [33] combined BBO and GWO to obtain a BBO with strong universal applicability. In addition, many recent improvements combine several of these approaches to maximize the performance of BBO. Therefore, further improvements to BBO variants are still necessary.

2.2. Laplacian Biogeography-Based Optimization

Garg and Deep proposed LxBBO based on the Laplace crossover to improve the optimization performance of BBO [21]. The Laplace crossover is described as follows. There are two parents: x1, the immigration habitat, and x2, the emigration habitat, which is selected by the roulette wheel selection. A random number β is generated that follows the Laplace distribution, given by the following equation:

f(x) = (1 / (2b)) · exp(−|x − a| / b), (6)

where a ∈ R is called the location parameter and b > 0 is called the scale parameter. Then, two new offspring are generated as

y1 = x1 + β · |x1 − x2|, (7)

y2 = x2 + β · |x1 − x2|, (8)

The two new habitats are blended to make a new habitat y (see equation (9)) with a blending parameter γ given by equation (10):

y = γ · y1 + (1 − γ) · y2, (9)

where γmin and γmax are the minimum and maximum values of γ, respectively, both lying in [0, 1], t is the current iteration number, and k is a user-defined parameter less than 1.

From equations (7) and (8), the difference between the two equations lies in their first term: the first term of equation (7) is x1, while that of equation (8) is x2. The value of γ gets smaller as t increases in equation (10) (see Figure 2(a)). From equation (10), in the early search phase, γ is larger, so y is mostly affected by the offspring y1; the difference between x1 and x2 is larger, and the search range around x1 is larger, so the algorithm has stronger exploration. In the late search phase, γ is smaller, so y is mostly affected by the offspring y2; the difference between x1 and x2 is smaller, the search range around x2 (a good position) is smaller, and the algorithm has stronger exploitation.
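A compact Python sketch of the Laplace crossover and blending described above (assuming the standard Laplace-crossover form in which both offspring are displaced from their parents by β times the parent distance; the default parameters a = 0 and b = 0.5 follow the LxBBO settings reported later in Section 4.1):

```python
import math
import random

def laplace_beta(a=0.0, b=0.5):
    """Draw beta from a Laplace(a, b) distribution by inverting its CDF."""
    u = max(random.random(), 1e-12)     # avoid log(0)
    return a - b * math.log(u) if random.random() <= 0.5 else a + b * math.log(u)

def laplace_crossover(x1, x2, a=0.0, b=0.5):
    """Assumed standard Laplace crossover (equations (7) and (8)): both
    offspring are displaced from their parents by beta times the
    componentwise parent distance."""
    beta = laplace_beta(a, b)
    y1 = [x1[j] + beta * abs(x1[j] - x2[j]) for j in range(len(x1))]
    y2 = [x2[j] + beta * abs(x1[j] - x2[j]) for j in range(len(x1))]
    return y1, y2

def blend(y1, y2, gamma):
    """Blend the two offspring into one habitat y (equation (9))."""
    return [gamma * y1[j] + (1.0 - gamma) * y2[j] for j in range(len(y1))]
```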

In LxBBO, except that the migration operator is replaced by the Laplace operator, everything else is the same as in BBO; for example, both use the mutation operator and the elitist strategy. LxBBO uses the immigration and emigration habitats as parents to generate two offspring and then obtains a new habitat. In this way, new positions can be generated from promising new areas of the search space, which enhances the optimization performance of BBO.

3. Improved Laplacian Biogeography-Based Optimization

3.1. Defects of Laplacian Biogeography-Based Optimization

Although LxBBO largely enhances the performance of BBO, it has some drawbacks. (1) The Laplace operator has many parameters. From equations (6) to (10), a, b, γmin, γmax, and k need tedious tuning in various applications. In the experiments, according to [21], k is set to 0.95, and the power of the difference between the maximum value and the minimum value is calculated at each iteration, which incurs some computational cost. (2) LxBBO shares some drawbacks with BBO. For example, when the emigration habitats are selected by the roulette wheel selection, a poor habitat may be chosen to share its information with a good habitat, which reduces the quality of the population; moreover, the roulette wheel selection has high computational complexity. (3) Both BBO and LxBBO use the mutation operation. Although the mutation operator can enhance the global exploration ability, it may mutate and destroy some better habitats, causing population degradation and affecting the convergence quality, especially in the late search phase. (4) The mutation operator needs to compute the mutation rate of each habitat and then complete the mutation operation. (5) The elitist strategy is used, so the population needs to be sorted twice at each iteration, which results in high computational complexity. In order to address these drawbacks, several improvements are proposed in this paper.

3.2. Improved Laplace Operator

To enhance the performance and operability of LxBBO, an improved Laplace operator is proposed.

Inspired by the idea of [34], when an individual with better fitness (namely, He) is used to form a difference with an individual with poorer fitness (namely, Hk) (see equations (11) and (12)), the search is pulled toward the good individual, which accelerates convergence. The random number β is calculated by equation (13), and the two habitats H1 and H2 are generated by equations (11) and (12). Compared with equations (7) and (8), the difference is that the emigration habitat of equations (11) and (12) is selected by the example learning selection [33]. He has a better fitness value than Hk, and using the difference between He and Hk ensures that the search direction is closer to the better solution, so the convergence quality is improved:

There are many parameters to set and much complexity in equation (10), so a new dynamic weight parameter γ is adopted. It is expressed as equation (14), and the difference between γ in equations (10) and (14) is shown in Figure 2, where Figure 2(a) shows the curve of γ in LxBBO and Figure 2(b) shows the curve of γ in ILxBBO. From Figure 2, in LxBBO, γ gets smaller as t increases and stays at an almost constant value (about 0.1) after roughly 100 iterations. In ILxBBO, however, γ increases linearly with t: when t = 0, γ = 0.5, and when t = MaxDT, γ = 1, so γ is a linear value between 0.5 and 1:

γ = 0.5 + 0.5 · t / MaxDT, (14)

where MaxDT is the maximum iteration number. No parameters need tuning, which gives ILxBBO stronger operability through the dynamic weight γ. H is given by equation (15):

From equation (15), in the earlier stage, H accepts the information from H1 and H2 to increase the diversity. In the later stage, H is more affected by H2 to enhance the exploitation and the convergence speed.
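Since equations (11)-(13) and (15) are not reproduced above, the following Python sketch only illustrates the described behaviour under stated assumptions: the intermediate habitats are assumed to use the directed difference (He − Hk) toward the better habitat, and equation (15) is assumed to blend H1 and H2 with the weight γ of equation (14), so that H is dominated by H2 late in the search. It is one plausible reading, not the paper's exact operator.

```python
import math
import random

def improved_laplace_operator(H_k, H_e, t, MaxDT, a=0.0, b=0.5):
    """Sketch of the improved Laplace operator of Section 3.2.

    Hypothetical forms for equations (11)-(13) and (15): the two intermediate
    habitats use the directed difference (H_e - H_k), which points toward the
    better habitat H_e, and the result blends H1 and H2 with the weight gamma
    of equation (14), so H2 dominates late in the run.
    """
    u = max(random.random(), 1e-12)
    beta = a - b * math.log(u) if random.random() <= 0.5 else a + b * math.log(u)
    D = len(H_k)
    H1 = [H_k[j] + beta * (H_e[j] - H_k[j]) for j in range(D)]        # assumed eq. (11)
    H2 = [H_e[j] + beta * (H_e[j] - H_k[j]) for j in range(D)]        # assumed eq. (12)
    gamma = 0.5 + 0.5 * t / MaxDT                                     # equation (14)
    return [gamma * H2[j] + (1.0 - gamma) * H1[j] for j in range(D)]  # assumed eq. (15)
```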

3.3. Improved Migration Operator

In LxBBO, the emigration habitats are selected by the roulette wheel selection as in BBO, so good solutions may receive features from poor solutions and the population may be degenerated. In order to overcome the drawbacks of the roulette wheel selection, the example learning selection [33] is adopted instead. The selection approach is described as follows. When the population is sorted from the best to the worst, the kth habitat is better than any habitat ranked behind it. When Hk is selected for migration, there are k − 1 habitats for it to learn from, and the emigration habitat He is calculated by equation (16), where ceil() is the function that rounds toward positive infinity. From equation (16), He is not worse than Hk, which improves the quality of the solution by sharing features from good solutions. Furthermore, ILxBBO only needs to calculate the immigration rate and does not need to calculate the emigration rate, which further reduces the computational complexity: the emigration habitat can be selected by equation (16) alone. The example learning selection overcomes the defects of the roulette wheel selection, and its calculation is simple, which reduces the computational complexity. The improved Laplace operator and the improved migration operator together form the improved Laplace migration operator.
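A small Python sketch of the example learning selection, assuming equation (16) picks a uniformly random index among the k − 1 better-ranked habitats via ceil(rand · (k − 1)); this form is an assumption consistent with the description, not a quotation of the paper's equation.

```python
import math
import random

def example_learning_select(k):
    """Assumed form of equation (16): for the habitat ranked k (1-based,
    population sorted from best to worst), pick the emigration habitat
    uniformly at random among the k - 1 better-ranked habitats."""
    assert k >= 2, "the best habitat has no better example to learn from"
    return max(1, math.ceil(random.random() * (k - 1)))   # index in 1..k-1

# The habitat ranked 5th learns from one of the habitats ranked 1-4.
print(example_learning_select(5))
```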

3.4. Dynamic Two-Differential Perturbing Operator

DE, proposed by Storn and Price in 1997 [3], is a popular optimization algorithm. It generates temporary individuals based on the differences between individuals in the population and realizes evolution by random recombination. In the early search period, the individuals in the population differ more, and the algorithm can search over a large range, thus obtaining global exploration ability. In the late search period, the algorithm searches in the vicinity of each individual and obtains local search ability [35]. A dynamic differential evolution algorithm was proposed by Wu et al. [36], and experiments show that the dynamic method has better performance.

LxBBO obtains global exploration ability through its mutation operator. However, the mutation operation is random and can easily destroy better solutions, especially in the late search stage. In addition, the mutation operation needs to calculate the mutation rate of each habitat as well as perform the mutation itself, which increases the computational complexity. So the mutation operator is removed in ILxBBO. In Section 3.3, the emigration habitats are selected by the example learning approach, so poor habitats can accept features from good habitats. However, the best habitats can hardly be updated because there are few examples for them to learn from, leading to low search efficiency. Although the best habitat may be the example of the second best habitat, in many cases there is little difference between the two, the second one remains almost unchanged, and the search efficiency is also low. So a dynamic two-differential perturbing operator is adopted for the best habitat and the second best habitat to enhance the search ability. The value of the scaling factor of the dynamic two-differential perturbing operator decreases as the current iteration number increases, as expressed in equation (17):

The dynamic two-differential perturbing operator is expressed as equation (18), where Hk is the best habitat or the second best habitat, Hb is the best habitat, and Hr and Hm are two habitats selected randomly from the current population such that m, r, and k are mutually different. In the early stage, the values of (Hm − Hr) and (Hb − Hk) and the scaling factor are all comparatively large, so Hk searches over a larger range to enhance the global search ability. In the late stage, the values of (Hm − Hr) and (Hb − Hk) and the scaling factor are comparatively small, so Hk searches over a smaller range to enhance the local search ability. From equation (18), Hk is affected by itself, the dynamic scaling factor, and the two differences; therefore, it is called a dynamic two-differential perturbing operator.
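Because equations (17) and (18) are not reproduced above, the sketch below illustrates the described behaviour under assumptions: the scaling factor is assumed to decay linearly from 1 to 0 over the iterations, and the update is assumed to add both scaled differences to Hk. It is one plausible reading, not the paper's exact formulation.

```python
import random

def dynamic_two_diff_perturb(population, k, best, t, MaxDT):
    """Sketch of the dynamic two-differential perturbing operator (Section 3.4).

    Hypothetical forms for equations (17) and (18): the scaling factor decays
    linearly with the iteration count, and the habitat is perturbed by the two
    scaled differences (H_m - H_r) and (H_best - H_k).
    """
    F = 1.0 - t / MaxDT                          # assumed decreasing scaling factor
    candidates = [i for i in range(len(population)) if i != k]
    r, m = random.sample(candidates, 2)          # m != k, r != k, m != r
    H_k, H_r, H_m, H_b = population[k], population[r], population[m], population[best]
    return [H_k[j] + F * (H_m[j] - H_r[j]) + F * (H_b[j] - H_k[j])
            for j in range(len(H_k))]            # assumed eq. (18)
```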

3.5. Two-Global-Best Guiding Operator

In order to further enhance the local search ability, the best and subbest habitats are used to guide the update of the worst habitat. In the first half of the search, the update is expressed by equation (19), where Hs is the subbest habitat. In the second half of the search, equations (20) and (21) are used to update the worst habitat with equal probability:

From equation (19), the worst habitat is affected by three terms: the worst habitat itself, the randomly weighted (−1 to 1) difference between the best habitat and the worst habitat, and the randomly weighted (−1 to 1) difference between the subbest habitat and the worst one. In the early search stage, the difference between the best (or subbest) habitat and the worst habitat is larger, and the range of 2 · (rand − 0.5) is from −1 to 1, so the worst habitat searches over a wider range around itself and obtains some global search ability. In the late stage, the worst habitat is updated by equation (20) or (21); they are similar to equation (19) but with different random weights. Under the guidance of the best and subbest habitats, the worst habitat obtains local search ability by searching in a small range around itself. From equations (19)–(21), the worst habitat is always affected by the best habitat and the subbest habitat, so this operator is called the two-global-best guiding operator.
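The following Python sketch illustrates the two-global-best guiding operator as described, with hypothetical forms for equations (19)-(21): in the first half of the search, two random weights in (−1, 1) pull the worst habitat toward the best and subbest habitats; in the second half, one of two similar updates with a narrower random weight is chosen with equal probability. The exact weights of equations (20) and (21) are assumptions.

```python
import random

def two_global_best_guide(H_w, H_b, H_s, t, MaxDT):
    """Sketch of the two-global-best guiding operator (Section 3.5).

    Hypothetical forms for equations (19)-(21): H_w is the worst habitat,
    H_b the best, and H_s the subbest.
    """
    D = len(H_w)
    if t <= MaxDT / 2:                                        # assumed eq. (19)
        w1 = 2.0 * (random.random() - 0.5)
        w2 = 2.0 * (random.random() - 0.5)
        return [H_w[j] + w1 * (H_b[j] - H_w[j]) + w2 * (H_s[j] - H_w[j])
                for j in range(D)]
    guide = H_b if random.random() < 0.5 else H_s             # assumed eqs. (20)/(21)
    w = random.random()
    return [H_w[j] + w * (guide[j] - H_w[j]) for j in range(D)]
```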

3.6. Other Improvements

In addition, the following improvements are also adopted. First, the greedy selection [24, 37] replaces the elitism strategy [8]. On the one hand, the population then only needs to be sorted once at each iteration, which reduces the computational complexity; on the other hand, the greedy selection avoids setting the elitism parameter. Second, the immigration rate calculation is moved outside the iteration loop, so the immigration rates are calculated only once for the whole run, which further reduces the computational complexity. The flowchart of ILxBBO is given in Figure 3.
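A minimal sketch of the greedy selection that replaces the elitism strategy (a generic greedy replacement assuming minimization; not the paper's code):

```python
def greedy_select(old_habitat, old_fitness, new_habitat, new_fitness):
    """Generic greedy replacement (minimization assumed): keep the new habitat
    only if it is not worse, so good solutions are never lost and no separate
    elitism bookkeeping or second sort per iteration is needed."""
    if new_fitness <= old_fitness:
        return new_habitat, new_fitness
    return old_habitat, old_fitness
```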

From the above description, the differences between ILxBBO and LxBBO are as follows. (1) In ILxBBO, the mutation operator is removed, so the mutation probability calculation and the mutation operation are omitted, which reduces the computational complexity. (2) The best habitat and the subbest habitat are updated by the dynamic two-differential perturbing operator in ILxBBO, while the first two best habitats are updated by the Laplace operator in LxBBO. (3) The worst habitat is updated by the two-global-best guiding operator, while in LxBBO the worst habitat also uses the Laplace operator. (4) The remaining habitats use the improved Laplace migration operator in ILxBBO, while the Laplace operator is again used in LxBBO. (5) There are few parameters to be tuned in ILxBBO, while LxBBO has many parameters to tune in various applications. (6) LxBBO uses the roulette wheel selection, while in ILxBBO the emigration habitat is selected by the example learning approach; the population tends to move toward the best solutions, the example learning approach does not require calculating the emigration rate, and the emigration habitat can be selected with only a small amount of computation. (7) The elitist strategy is adopted in LxBBO, while the greedy selection is used instead in ILxBBO, which saves one sorting step and further reduces the computational complexity.

4. Experiment Results and Analysis

4.1. Experiment Preparation

In order to verify ILxBBO, a large number of experiments are carried out on the complex functions from the CEC-2013 test set [38], where f1–f5 are unimodal functions, f6–f20 are basic multimodal functions, and f21–f28 are composition functions. All the experiments are implemented on a PC with a 3.1 GHz CPU and 4 GB RAM under the Microsoft Windows 7 operating system. The programming language is MATLAB R2014a.

To be fair, in all the experiments on the CEC-2013 test set, according to [38], the number of independent runs (Run) is 51, MaxDT is adjusted according to the dimension of the problem (for 30-D and 50-D, MaxDT is 3000 and 5000, respectively), and the maximum number of function evaluations (MaxNFEs) is set to D × 10,000. The population size (N) is 100 in LxBBO and ILxBBO. The maximum immigration rate I of ILxBBO is 1, as in LxBBO. According to [21], the parameters of LxBBO are as follows: the maximum emigration/immigration rate E/I = 1, mmax = 0.005, γmin = 0.1, γmax = 1, a = 0, b = 0.5, and k = 0.95.

We evaluate the statistical average (Mean) and standard deviation (Std) of these algorithms over the independent runs. For an algorithm, Mean represents its optimization ability and Std embodies its stability. The ranking criteria are as follows: first, we compare each algorithm's mean value on each function; the better the mean value is, the better the ranking is. If some algorithms obtain the same mean value, we then compare their Std values; the better the Std value is, the better the ranking is. If some algorithms obtain the same Mean and Std values, their rankings are considered the same. In addition, the best values are shown in bold in all the result tables.

4.2. Comparison of ILxBBO with Its Incomplete Variants

To illustrate the effectiveness of each component of ILxBBO, ILxBBO is compared with its incomplete variants and LxBBO on the 30-dimensional functions. These incomplete variants are described as follows:
GLxBBO is an incomplete variant of ILxBBO, which is LxBBO with only the two-global-best guiding operator, without the dynamic two-differential perturbing operator and the improved Laplace migration operator.
DLxBBO is an incomplete variant of ILxBBO, which is LxBBO with only the dynamic two-differential perturbing operator, without the improved Laplace migration operator and the two-global-best guiding operator.
OLxBBO is an incomplete variant of ILxBBO, which contains only the improved Laplace migration operator, without the dynamic two-differential perturbing operator and the two-global-best guiding operator.

The experimental results are shown in Table 1. From Table 1, ILxBBO ranks first 14 times, LxBBO 4 times, GLxBBO 9 times, and DLxBBO and OLxBBO 1 time each. The average rankings of ILxBBO, LxBBO, GLxBBO, DLxBBO, and OLxBBO are 1.79, 4.11, 2.32, 3.14, and 3.36, respectively. ILxBBO's average ranking is significantly better than those of its incomplete variants, and the average ranking of LxBBO is the last. This shows that each improvement on LxBBO is effective and essential, and that the two-global-best guiding strategy contributes the most to ILxBBO.

4.3. Comparison with BBO’s Variants

In this group of experiments, ILxBBO is compared with quite a few state-of-the-art BBO variants on the 30-dimensional and 50-dimensional functions from the CEC-2013 test set. The comparison algorithms include TDBBO [39], BIBBO [25], BBOM [40], DEBBO [30], BLPSO [41], PRBBO [24], WRBBO [37], EMBBO [27], and BHCS [42]. These algorithms are all BBO variants proposed in recent years and are therefore highly comparable. Their common parameter settings are the same as those of ILxBBO, and the other parameter settings are taken from their corresponding references. The experimental results on the 30-dimensional and 50-dimensional functions are shown in Tables 2 and 3, respectively.

From Table 2, ILxBBO ranks first 9 times, and the optimal value (0) on f1 is obtained by ILxBBO. TDBBO ranks first 1 time; BIBBO, BLPSO, and BHCS rank first 0 times; DEBBO ranks first 4 times; and both WRBBO and EMBBO rank first 5 times. On the 5 unimodal functions, ILxBBO ranks first 2 times and second 2 times, which shows that the improved migration operator and the two-global-best guiding operator enhance the local search ability. On the 15 basic multimodal functions, ILxBBO ranks first 6 times and second 3 times, which shows that the dynamic two-differential perturbing operator enhances the global search ability. On the 8 composition functions, ILxBBO ranks first 1 time. ILxBBO's average ranking (2.36) is significantly better than those of the comparison algorithms. These comparison results show that, in general, ILxBBO obtains the most significant optimization performance among the compared algorithms.

From Table 3, ILxBBO obtains 10 times ranking the first and gets the first average ranking (2.14). Its performance is just as good as that on the 30-dimensional functions.

4.4. Comparison with Other IOAs

To further verify the effectiveness of ILxBBO, it is compared with other IOAs on the 30-dimensional functions. These algorithms include YYPO [43], DPCABC [44], DFnABC [45], FMPSO [46], GLPSO [47], HFPSO [48], and MEGWO [49], which are recent, competitive, and representative IOAs. DPCABC and DFnABC are ABC variants, and MEGWO is a GWO variant. FMPSO is a PSO variant, and GLPSO and HFPSO are PSO hybrids with the Genetic Algorithm (GA) and the Firefly Algorithm (FA), respectively. For the comparison algorithms, on the 30-dimensional functions, MaxNFEs is 300,000 and Run is 51 (except for FMPSO, for which it is 30). The other parameter settings are taken from their corresponding references, and the data of these comparison algorithms are quoted directly from those references. The experimental results are shown in Table 4.

From Table 4, ILxBBO ranks first 9 times, YYPO 2 times, DPCABC 4 times, DFnABC and GLPSO 7 times each, FMPSO 3 times, HFPSO 1 time, and MEGWO 0 times. The average ranking of ILxBBO (2.71) is the first, followed by GLPSO, DFnABC, YYPO, FMPSO, DPCABC, HFPSO, and MEGWO. Therefore, this further verifies the optimization performance of ILxBBO.

4.5. Convergence Analysis

In order to highlight the difference between ILxBBO and LxBBO in convergence, Figure 4 shows the convergence curves of ILxBBO and LxBBO on the 10-dimensional functions. For conciseness, some representative convergence curves are plotted for unimodal (f2 and f3), multimodal (f6, f9, f11, f13, f14, f17, f19, and f20), and composition functions (f21, f22, f24, f25, f27, and f28). From Figure 4, on f2, f3, f6, f11, f25, f27, and f28, ILxBBO's convergence speed is much faster than LxBBO's, and the advantages of ILxBBO are prominent. On f9, f13, f20, and f22, in the early search stage, the convergence speed of ILxBBO is not as fast as LxBBO's, but in the late stage, ILxBBO's convergence speed is faster. On f14, f17, f19, f21, and f24, ILxBBO achieves nearly the same convergence speed as LxBBO. On the whole, ILxBBO obtains better convergence performance than LxBBO, which verifies that the improved Laplace migration operator, example learning, and the other measures accelerate the convergence of LxBBO.

4.6. CPU Runtime

In order to assess the running time of ILxBBO, its average running time is recorded and analyzed only on the 30-dimensional functions, averaged over 51 independent runs. Figure 5 shows the average runtime comparison between ILxBBO and 9 BBO variants; the y-axis is the average runtime in seconds (s). From Figure 5, ILxBBO's average runtime is the smallest (6.0200 s), which is 80.96%, 54.46%, 55.65%, 79.69%, 38.71%, 56.54%, 65.68%, 87.08%, and 52.74% of the runtimes of TDBBO (7.4356 s), BIBBO (11.0533 s), BBOM (10.8169 s), DEBBO (7.5540 s), BLPSO (15.5514 s), WRBBO (10.6473 s), EMBBO (9.6159 s), BHCS (6.9128 s), and LxBBO (11.4140 s), respectively. ILxBBO obtains the fastest speed owing to the adoption of several strategies such as example learning, removal of the mutation operation, and the greedy selection, which saves one sorting step.

4.7. Application to Quadratic Assignment Problem (QAP)

QAP is an NP-hard problem that was first introduced by Koopmans and Beckmann [50], and it is among the most studied combinatorial optimization problems. QAP can be described as the problem of assigning a set of facilities to a set of locations, given the distances between the locations and the flows between the facilities.

The approach uses two matrices of size N × N given by the following equations:

F = [fij]N×N, (22)

D = [dij]N×N, (23)

where i, j = 1, 2, ..., N, fij is the flow or weight between each pair of facilities, representing the flow from facility i to facility j, and dij is the distance between each pair of locations, representing the Euclidean distance from Location i to Location j. The objective function is given as follows:

min C(π) = Σi=1..N Σj=1..N fij · dπ(i)π(j), (24)

where π is a permutation that assigns facility i to location π(i).
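Assuming the standard Koopmans-Beckmann form of the objective in equation (24) and 0-based indices, a tiny Python sketch of the QAP cost with hypothetical flow and distance matrices:

```python
def qap_cost(perm, F, D):
    """Standard QAP objective (equation (24)): total flow * distance cost of
    assigning facility i to location perm[i] (0-based indices)."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

# Tiny 3-facility example with hypothetical flow (F) and distance (D) matrices.
F = [[0, 3, 1],
     [3, 0, 2],
     [1, 2, 0]]
D = [[0, 1, 4],
     [1, 0, 2],
     [4, 2, 0]]
print(qap_cost([0, 2, 1], F, D))   # facility 0 -> location 0, 1 -> 2, 2 -> 1
```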

QAP can be applied to many practical problems, such as modeling the location of hospital departments, optimizing the configuration of departments, reducing the time each patient spends traveling, and improving the efficiency of hospital services for patients. As a simple example, on a single floor of a hospital, five departments (D1, D2, D3, D4, and D5) are assigned to five locations (1, 2, 3, 4, and 5), as shown in Figure 6.

On a single floor, the importance of each department is different: departments with a larger flow of people are more important, while departments with a smaller flow are less important, and the importance between two departments is represented by the flow. The matrix F represents the flow between two departments, and the matrix L represents the distance between two locations. The task is then to assign this group of departments to this group of locations so as to minimize the total cost incurred by patients moving between departments.

ILxBBO deals with continuous problems in which the search agents are represented by real values. To solve QAP with ILxBBO, it is first necessary to convert the continuous representation into a discrete one. The conversion method from [20] is adopted in this paper and is explained as follows: the dimensionality of each solution equals the number of locations or facilities in QAP, and each solution represents an arrangement of the positions, corresponding to a location allocation scheme. The best allocation scheme is obtained through the IOA's optimization process. Suppose a QAP instance has 10 facilities that need to be allocated to 10 locations, as shown in Figure 7(a): Facility 1 is assigned to Location 8, Facility 2 to Location 5, Facility 3 to Location 1, and so on. QAP is a permutation-based discrete problem, so when an IOA is used to solve it, the real values are ranked from the largest to the smallest and mapped into a permutation sequence, as shown in Figure 7(b). That is, the maximum real value 13.26 corresponds to the minimum integer 1, the minimum real value 0.85 to the maximum integer 10, and so on.
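A small Python sketch of this rank-based mapping (the 4-dimensional toy vector is hypothetical; only the rule that the largest real value maps to 1 and the smallest to the largest integer comes from the text):

```python
def real_to_permutation(solution):
    """Rank-based mapping: the largest real value receives integer 1, the
    second largest 2, and so on, so the smallest value receives the largest
    integer (as described for Figure 7)."""
    order = sorted(range(len(solution)), key=lambda j: solution[j], reverse=True)
    perm = [0] * len(solution)
    for rank, j in enumerate(order, start=1):
        perm[j] = rank
    return perm

# Hypothetical 4-dimensional example: 13.26 maps to 1, 0.85 to the largest integer.
print(real_to_permutation([5.3, 13.26, 0.85, 7.1]))   # -> [3, 1, 4, 2]
```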

ILxBBO is used to solve QAP with two modifications: first, each random real solution or updated real solution is converted into a permutation sequence by the real-value mapping of Figure 7; second, equation (24) is used as the objective function. The comparison algorithms include IPSO and IFA, both from [51]; they are improved versions of PSO and FA, respectively, perform well on QAP, and are therefore highly comparable.

To be fair, according to the recommended common parameter settings of [20], MaxNFEs is 100,000 and Run is 30. The parameters of the 4 compared algorithms are therefore set as follows: Run is 30, N × MaxDT = 100,000, N is 100, and MaxDT is 1000. Table 5 records the Mean and Std of each algorithm on the benchmark data; the data are from [20] and have been used as a standard set for solving QAP. Best-known refers to the best solution found so far.

From Table 5, on Mean, ILxBBO obtains the best results on all 10 instances. On Std, ILxBBO gets 0 on 7 of the 10 instances (had12, had14, had16, scr12, tai12a, tai12b, and chr12b), which indicates that ILxBBO has strong stability. On had12, had14, had16, scr12, tai12a, tai12b, and chr12b, the mean value of ILxBBO is the same as the Best-known value, while on scr15, tai15a, and chr12b, the mean value of ILxBBO is slightly worse than the Best-known value. Generally, this shows that ILxBBO solves QAP better than LxBBO does.

4.8. Wilcoxon Signed Rank Test Analysis

The Wilcoxon signed rank test is a nonparametric test method [52], and we use it to statistically compare the performance of ILxBBO with the comparison algorithms on the 30-dimensional functions, the 50-dimensional functions, and QAP. The software is IBM SPSS Statistics 19. The data are taken from Tables 2–5.

The Wilcoxon signed rank test results are shown in Table 6, where R+ refers to the sum of ranks for the problems on which ILxBBO outperforms the comparison algorithm and R− refers to the sum of ranks for the opposite. When ILxBBO and the comparison algorithm obtain equal optimization performance, the corresponding ranks are split evenly between R+ and R− [33]. The p values can be computed from the R+ and R− values. "n/w/t/l" means that the number of benchmark problems is n and that ILxBBO wins on w functions, ties on t functions, and loses on l functions. The following criterion is applied to compare the results:
(1) When p ≥ 0.05, the difference between the two algorithms is not significant.
(2) When p < 0.05, the difference between the two algorithms is significant.
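For readers who want to reproduce such a pairwise comparison outside SPSS, the following Python sketch uses scipy.stats.wilcoxon with zero_method="zsplit", which splits the ranks of zero differences evenly between R+ and R− as described above; the per-function values are placeholders, not the paper's data.

```python
from scipy.stats import wilcoxon

# Placeholder per-function results for ILxBBO and one comparison algorithm
# (illustrative numbers only, not the paper's data).
ilxbbo = [0.0, 1.2e5, 3.4e7, 2.1e3, 0.0, 1.5e1, 4.8e1, 2.1e1]
other  = [0.0, 1.9e5, 5.6e7, 3.0e3, 1.1e-3, 2.2e1, 5.5e1, 2.1e1]

# zero_method="zsplit" splits the ranks of zero differences evenly between
# R+ and R-, mirroring the tie handling described in the text.
stat, p_value = wilcoxon(ilxbbo, other, zero_method="zsplit")
print(f"W = {stat}, p = {p_value:.4f}")
# p < 0.05 -> the difference between the two algorithms is significant.
```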

From Table 6, the p value for ILxBBO versus GLPSO is more than 0.05, so ILxBBO is not significantly better than GLPSO. However, ILxBBO wins on 13 functions compared with GLPSO, including two functions on which the mean values are equal; that is because on those functions the variance of ILxBBO is less than the variance of GLPSO (Table 4). On the 30-dimensional functions, the optimization performance of ILxBBO is significantly better than those of TDBBO, BIBBO, BBOM, DEBBO, BLPSO, WRBBO, EMBBO, BHCS, YYPO, DPCABC, DFnABC, FMPSO, and HFPSO. On the 50-dimensional functions, the optimization performance of ILxBBO is significantly better than those of PRBBO, TDBBO, DEBBO, WRBBO, BHCS, and EMBBO. On QAP, ILxBBO is significantly better than LxBBO, IPSO, and IFA. Generally, the Wilcoxon signed rank tests show again that ILxBBO obtains better optimization performance and verify the previous conclusions.

5. Conclusions and Future Work

In this paper, an improved LxBBO (ILxBBO) is proposed to improve the optimization performance of LxBBO. The improvements are as follows: the mutation operator is removed to simplify the search process and reduce the computational complexity; a dynamic two-differential perturbing operator is proposed to update the first two best habitats so that the global search ability and the local search ability are improved in the early and late search phases, respectively; the worst habitat is updated by a two-global-best guiding operator to improve the search ability; and an improved Laplace migration operator is used to update the other habitats to reduce the parameter settings and accelerate convergence. In addition, approaches such as the example learning selection instead of the roulette wheel selection and the greedy selection instead of the elitism strategy are adopted to reduce the computational complexity and enhance the performance. In order to verify ILxBBO, a large number of experiments are carried out on the CEC-2013 test set, and the results verify that ILxBBO outperforms quite a few state-of-the-art algorithms in most cases. Moreover, the obtained results indicate how effective ILxBBO is in solving QAP. In the future, the improved Laplace operator of this paper may be applied to other BBO variants and other IOAs, and ILxBBO is expected to be applied to more engineering problems.

Data Availability

CEC-2013 databases used in this paper are publicly available for download and, in particular, can be accessed from http://www.rforge.net/cec2013/files/, whereas QAP databases can be downloaded from http://anjos.mgi.polymtl.ca/qaplib/inst.html.

Disclosure

This article does not contain any studies with human participants performed by any of the authors.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant U1904123, in part by the Natural Science Foundation of Henan Province under Grant 162300410177, and in part by the Key Research Program of Application Foundation and Advanced Technology of Henan Province under Grant 19A520026.