Abstract

Many optimization problems have become increasingly complex, which has motivated research on improving optimization algorithms. The monarch butterfly optimization (MBO) algorithm has proven to be an effective tool for solving various kinds of optimization problems. However, the search strategy of the basic MBO algorithm easily falls into local optima, causing premature convergence and poor performance on many complex optimization problems. To address these issues, this paper develops a novel MBO algorithm based on opposition-based learning (OBL) and random local perturbation (RLP). First, the OBL method is introduced to generate an opposition-based population from the original population. By comparing the opposition-based population with the original population, the better individuals are selected to pass to the next generation, which efficiently helps prevent the MBO from falling into a local optimum. Second, a new RLP operator is defined and introduced to improve the migration operator. This operation shares the information of excellent individuals and helps guide poor individuals toward the optimal solution. A greedy strategy replaces the elitist strategy, which eliminates the elitist parameter of the basic MBO, removes one sorting operation, and enhances computational efficiency. Finally, the OBL and RLP-based improved MBO (OPMBO) algorithm is developed together with its complexity analysis, after which extensive experiments are performed on a series of benchmark functions of different dimensions, and OPMBO is applied to clustering optimization on several public data sets. Experimental results demonstrate that the proposed algorithm achieves excellent optimization performance compared with several state-of-the-art algorithms in most of the test cases.

1. Introduction

Many real-world tasks, which can be formulated as optimization problems, have become increasingly complex and are difficult to solve using traditional optimization algorithms [1]. Recently, many nature-inspired metaheuristic algorithms have been proposed and applied to deal with various optimization problems [2]. Research on tackling such applications with optimization techniques has therefore become a fruitful field, especially for global optimization problems. The swarm intelligence optimization (SIO) algorithm is a kind of bionic stochastic method inspired by natural phenomena and biological behaviors, and it can deal with certain high-dimensional, complex, and variable optimization problems because of its good computing performance and simple model [3, 4].

Over the past several decades, SIO has become an attractive research area, which has led to the emergence of a large variety of intelligent optimization algorithms. Kennedy and Eberhart [5] proposed a particle swarm optimization (PSO) algorithm derived from the simulation of bird foraging behaviors. However, PSO often faces the premature convergence problem, especially on multimodal problems, as it may get stuck at a specific point [6]. Wu and Yang [7] presented an elitist transposon quantum-based PSO to solve economic dispatch problems. Eusuff and Lansey [8] presented a shuffled frog-leaping algorithm (SFLA), which is inspired by the memetic evolution of frogs seeking food in a pond. It has been shown to be competitive with PSO, but the SFLA is good at exploration yet poor at exploitation and easily gets trapped in local optima when solving some complex multimodal problems; meanwhile, its convergence speed is slow [9]. Tan and Zhu [10] designed a fireworks algorithm (FWA) for the global optimization of complex functions. The FWA has powerful global optimization capabilities for solving classification problems, but there is no direct interaction among the solutions found during the optimization process of FWA; its convergence speed is slow and the computational cost is high [11]. Yin et al. [12] introduced a hybrid FWA-based parameter optimization into nonlinear hypersonic vehicle dynamics control to satisfy the design requirements with high probability. Gandomi and Alavi [13] developed a krill herd algorithm (KHA), which mimics the herding behavior of ocean krill individuals. The herding of the krill individuals is a multiobjective process, and the position of an individual krill is time dependent. Unfortunately, the performance of KHA is degraded by its poor exploitation capability, and the basic KHA has low convergence speed and accuracy [14].
Singh and Khan [15] proposed an artificial shark optimization (ASO) method to remove the limitations of existing algorithms for solving the economical operation problem of microgrids. Mirjalili et al. [16] established the grey wolf optimizer (GWO), a metaheuristic based on grey wolves. The GWO algorithm is considered for learning methods due to its advantages, including high accuracy, effectiveness, and competitiveness [17]. The paramount challenge in GWO is that it is prone to stagnation in local optima [18]. Differential evolution (DE), a popular stochastic optimizer proposed by Storn and Price [19], exhibits consistent and reliable performance in nonlinear and multimodal environments and has proven to be effective for constrained optimization problems [20]. Some empirical studies have shown that DE outperforms PSO [21]. However, different parameter settings have a great impact on the performance of the DE algorithm when solving various global optimization problems, or even the same problem at different evolutionary stages [22]. The fruit fly optimization algorithm (FOA), a global optimization method, was proposed by Pan [23], who was inspired by the foraging behavior of fruit flies. The FOA is simple in structure and easy to implement [24]. However, the basic FOA often reaches only a local extremum when solving high-dimensional functions and large-scale combinatorial optimization problems [25]. The idea of ant colony optimization (ACO) is to mimic the way real ants find the shortest route between a food source and their nest. Recently, the ACO algorithm and its variants have been investigated for tackling combinatorial optimization problems [26]. However, the efficiency of ACO is unsatisfactory, since each ant needs to search for a complete solution and the runtime is rather long [27]. Abedinia et al. [28] introduced a shark smell optimization (SSO) algorithm, which has been applied to the load frequency control problem in electrical power systems.
The monarch butterfly optimization (MBO) algorithm was first presented by Wang et al. [29], and it simulates the migration behavior of monarch butterflies in nature. Although most heuristic techniques can provide fast and efficient solutions, they sometimes fail to discover the global optimal solution, converge slowly, and require the tuning of several parameters [30]. By now, the MBO algorithm has become one of the most widely used SIO algorithms; it has two important operators, the migration operator and the butterfly adjusting operator [31]. The former provides a certain local search capability, and the latter gives a global search capability. The search direction of the monarch butterflies is mainly determined by these two operators. Since the migration operator and the butterfly adjusting operator can be implemented simultaneously, MBO is ideally suited for parallel processing and is capable of making trade-offs between intensification and diversification [32]. In addition, the MBO algorithm has a simple calculation process, requires few computational parameters, and is easy to implement in a program. Furthermore, some advantages of MBO are unmatched by many other intelligent optimization algorithms. Therefore, the MBO algorithm and its variants have been widely used in many fields, such as the dynamic vehicle routing problem [30], the 0-1 knapsack problem [31], neural network training [32], the optimal power flow problem [33], and the prevention of osteoporosis [34].

In the last few years, scholars have made many improvements to enhance the performance of MBO. In the basic MBO algorithm, after the migration operator is applied, the generated monarch butterfly is accepted as a new monarch butterfly in the next generation regardless of whether it is better or worse. Hu et al. [35] used self-adaptive and greedy strategies to improve the performance of the basic MBO. However, it suffers from poor standard deviations and average fitness on some benchmarks. Wang et al. [36] developed a variant of MBO with a greedy strategy and a self-adaptive crossover operator (GCMBO), in which the greedy strategy accelerates convergence and the self-adaptive crossover operator significantly improves population diversity in the later phase of the search. Feng et al. [37] proposed a chaotic MBO algorithm, in which chaos theory is employed to enhance its global optimization ability. Feng et al. [38] introduced neighborhood mutation with crowding and Gaussian perturbation into the MBO algorithm, in which the first strategy enhances the global search ability, while the second strengthens the local search ability and prevents premature convergence during the evolution process. At present, MBO is often combined with other SIO methods to improve the optimization performance. The main objective is to improve the balance between exploration and exploitation in those algorithms in order to address the issues of trapping in local optima, slow convergence, and low accuracy in complex optimization problems [39]. Ghanem and Jantan [40] presented a metaheuristic algorithm that combines artificial bee colony optimization with MBO. Ghetas et al. [41] introduced the harmony search algorithm into MBO to enhance its search ability, adding mutation operators to the butterfly adjusting process to improve the exploitation and exploration abilities and speed up the convergence rate. Strumberger et al. [42] incorporated the search mechanism of the firefly algorithm (FA) into MBO to overcome the deficiency that, in early iterations, MBO exceedingly directs the search process toward the current best solution. However, most of the abovementioned MBO algorithms still fall easily into local optima and converge rather slowly. This inspires the authors to investigate a new nature-inspired optimization algorithm based on MBO.

Based on the above analyses of MBO, it is clear that easily falling into local optima is one of the typical disadvantages of MBO, and there are many ways to mitigate this drawback. The OBL method, proposed by Tizhoosh [43], is one of the most effective. It can prevent an algorithm from falling into local optima to some degree. For example, Shang et al. [44] introduced OBL, a dynamic inertia weight, and a postprocedure to improve PSO with mutual information as its fitness function for detecting SNP-SNP interactions, in which OBL enhances the global explorative ability. Since the poor exploration capability of SFLA sometimes leads to trapping in local optima and thus poor convergence, Sharma and Pant [45] embedded OBL into the memeplexes before the frogs initiate foraging, which not only enhances the local search mechanism of SFLA but also improves the diversity. Ahandani and Alavi-Rad [46] used new versions of the SFLA that, on the one hand, employ OBL to accelerate the SFLA without causing premature convergence and, on the other hand, use the OBL strategy to diversify the search moves of SFLA. Yu et al. [47], inspired by OBL, improved the performance of the FA, in which the worst firefly is forced to escape from its normal path after the OBL operation, helping it to escape from local optima. Yang et al. [48] presented an improved artificial bee colony algorithm based on OBL to overcome the shortcomings of a slow convergence rate and sinking into local optima. Shan et al. [49] embedded OBL into the bat algorithm to enhance its diversity and convergence capability. Park and Lee [50] combined differential evolution with OBL to obtain high-quality solutions with low computational effort. Kumar and Sahoo [51] presented a cat swarm optimization (CSO) algorithm via OBL, which enhances the diversity of the algorithm. Sarkhel et al. [52] applied OBL to the harmony search algorithm to overcome slow convergence to globally optimal solutions.
Zhang et al. [53] merged OBL into the biogeography-based optimization algorithm to prevent it from falling into local optima. Zhang et al. [54] added OBL to the GWO (OGWO) for the same purpose. In recent years, OBL has become a widely used technique in optimization algorithms. It is noted that OBL can increase population diversity and enhance global search ability [55]. If the current best candidate solution falls into a local optimum, it may mislead the other candidate solutions into the same local optimum; however, its opposite is often far from the local optimum. Therefore, this paper introduces OBL into the initialization phase of MBO, effectively helping the algorithm avoid falling into local optima. Furthermore, the opposition-based individuals generated by OBL allow the better individuals to be accepted. This operation can efficiently prevent the algorithm from falling into local optima to some extent.

What is more, the solution search mechanism of MBO is still insufficient, which may bring about premature convergence and low search accuracy when solving complex optimization problems [49]. Considering that MBO converges very slowly, a perturbation operator strategy can be used to preserve the diversity of the monarch butterfly population and counter premature convergence. For example, Liu et al. [9] designed a perturbation operator strategy for the convergence state to help the best frog jump out of possible local optima and further increase the performance of SFLA. Wang et al. [56] proposed an improved FOA using swarm collaboration and a random perturbation strategy to enhance the performance. Li et al. [57] developed an artificial bee colony algorithm with random perturbations for numerical optimization, in which a self-adaptive population perturbation strategy applies random perturbation to the current colony to enhance the population diversity. Yu et al. [58] presented a teaching-learning-based optimization with a chaotic perturbation mechanism, which produces many solutions around the current best solution and thereby enhances the search ability and global convergence. Li et al. [59] utilized the uniformity of Anderson chaotic mapping and applied chaos perturbation to part of the particles, based on the variance of the population's fitness, to avoid the untimely aggregation of the particle swarm. Based on the idea of random perturbation, a novel RLP operator is proposed in this paper to prevent premature convergence and is merged into the migration operator. The improved migration operator with RLP shares the information of excellent individuals, which is conducive to guiding individuals toward an optimal solution and accelerating convergence, aspects not considered in previous MBO variants.

The remainder of this paper is organized as follows: Section 2 reviews some related theory of the MBO algorithm. In Section 3, the OBL method and RLP-based migration operator are investigated, and the main procedure of improved MBO and its complexity analysis are given. Section 4 describes the experimental results and analysis. Finally, the conclusion is summarized in Section 5.

2. The MBO Algorithm

The theory of the MBO algorithm can be found in [29, 36]. In MBO, all monarch butterfly individuals are idealized as being located in only two lands: the northern United States and southern Canada (Land1), and Mexico (Land2). The locations of the monarch butterflies are updated in two ways, namely, by the migration operator and by the butterfly adjusting operator. First, offspring are generated (locations updated) through the migration operator. Second, the locations of the other monarch butterflies are updated by the butterfly adjusting operator. Thus, the search direction of a monarch butterfly individual is determined by the migration operator and the butterfly adjusting operator. Moreover, the two operators can be performed simultaneously, so the MBO algorithm is suitable for parallel processing and achieves a good balance between intensification and diversification. The MBO algorithm abides by the following idealized rules:

All the monarch butterflies are located only in Land1 and Land2. Namely, the population of the entire monarch butterflies is composed by the monarch butterflies in Land1 and Land2.

The offspring of each monarch butterfly are generated only by the migration operator in Land1 or Land2.

To keep the population constant, once a descendant monarch butterfly is produced, a corresponding parent monarch butterfly will disappear.

Monarch butterfly individuals with the best fitness automatically enter the next generation without any operation, which ensures that the quality of the monarch butterfly population does not decline as the number of iterations increases.

The MBO algorithm contains two important operators, which are described as follows.

The first operator is the migration operator, which governs the migration of the monarch butterflies between Land1 and Land2. Let the total number of monarch butterflies be NP; the numbers of monarch butterflies in Land1 and Land2 are NP1 = ceil(p × NP) and NP2 = NP − NP1, respectively, where p is the migration rate of monarch butterflies with p = 5/12 in MBO, and ceil(x) rounds x to the nearest integer greater than or equal to x. The subpopulations of Land1 and Land2 are denoted as Subpopulation1 and Subpopulation2, respectively. Then, the migration operator is expressed as

$$x_{i,k}^{t+1} = \begin{cases} x_{r_1,k}^{t}, & r \le p,\\ x_{r_2,k}^{t}, & r > p, \end{cases} \tag{1}$$

where $x_{i,k}^{t+1}$ is the kth element of $x_i$ in generation t + 1; similarly, $x_{r_1,k}^{t}$ denotes the kth element of $x_{r_1}$ in generation t, and $x_{r_2,k}^{t}$ is the kth element of $x_{r_2}$ in generation t; the current generation number is t, and the monarch butterflies r1 and r2 are randomly selected from Subpopulation1 and Subpopulation2, respectively. Here, r is calculated by r = rand × peri, where peri is the migration period, which is equal to 1.2 in MBO, and rand is a random number in [0, 1].
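The migration operator above can be sketched as follows (a minimal NumPy sketch; the subpopulations are assumed to be stored as 2-D arrays, and all function and variable names are illustrative, not from the paper):

```python
import numpy as np

def migration_operator(sub1, sub2, p=5/12, peri=1.2, rng=None):
    """Sketch of the MBO migration operator (Eq. (1)).

    sub1, sub2: arrays of shape (NP1, D) and (NP2, D) holding Subpopulation1
    and Subpopulation2. Returns one offspring per individual of Subpopulation1.
    """
    rng = np.random.default_rng() if rng is None else rng
    np1, dim = sub1.shape
    offspring = np.empty_like(sub1)
    for i in range(np1):
        for k in range(dim):
            r = rng.random() * peri                        # r = rand * peri
            if r <= p:                                     # element from Land1
                offspring[i, k] = sub1[rng.integers(np1), k]
            else:                                          # element from Land2
                offspring[i, k] = sub2[rng.integers(sub2.shape[0]), k]
    return offspring
```

Each offspring element is copied from a randomly chosen butterfly in one of the two lands, which is exactly the feature-sharing behavior criticized in Section 3.1.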

The second operator is the butterfly adjusting operator, which updates the positions of the monarch butterflies in Subpopulation2. It is described as

$$x_{j,k}^{t+1} = \begin{cases} x_{best,k}^{t}, & rand \le p,\\ x_{r_3,k}^{t}, & rand > p, \end{cases} \tag{2}$$

and, if BAR is less than the random number rand, the kth element of $x_j$ at t + 1 is further updated by $x_{j,k}^{t+1} = x_{j,k}^{t+1} + \alpha \times (dx_k - 0.5)$, where BAR is the butterfly adjusting rate. Here, $x_{j,k}^{t+1}$ is the kth element of $x_j$ in generation t + 1; similarly, $x_{best,k}^{t}$ is the kth element of $x_{best}$ in generation t, which is the best location among the monarch butterflies in Land1 and Land2; $x_{r_3,k}^{t}$ is the kth element of $x_{r_3}$ in generation t, where the monarch butterfly r3 is randomly selected from Subpopulation2. The weighting factor α is computed as $\alpha = S_{max}/t^2$, where $S_{max}$ is the maximum walk step. In (2), dx is the walk step of butterfly j, which can be calculated by the Levy flight such that $dx = \mathrm{Levy}(x_j^t)$.
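The butterfly adjusting operator can be sketched similarly. The Mantegna-style approximation of the Levy step and the default value BAR = p are assumptions made for illustration; the paper's exact Levy implementation is not reproduced in the text:

```python
import math
import numpy as np

def levy_step(dim, rng, beta=1.5):
    """Mantegna-style approximation of a Levy flight step (beta assumed)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def butterfly_adjusting(sub2, x_best, t, p=5/12, bar=5/12, s_max=1.0, rng=None):
    """Sketch of the butterfly adjusting operator (Eq. (2)) for Subpopulation2."""
    rng = np.random.default_rng() if rng is None else rng
    np2, dim = sub2.shape
    alpha = s_max / t ** 2                                 # alpha = S_max / t^2
    new = np.empty_like(sub2)
    for j in range(np2):
        dx = levy_step(dim, rng)                           # dx = Levy(x_j^t)
        for k in range(dim):
            if rng.random() <= p:
                new[j, k] = x_best[k]                      # move toward the best butterfly
            else:
                new[j, k] = sub2[rng.integers(np2), k]     # copy from a random r3 in Land2
                if rng.random() > bar:                     # extra random walk when rand > BAR
                    new[j, k] += alpha * (dx[k] - 0.5)
    return new
```

The shrinking weight alpha makes the Levy perturbation large in early generations (global search) and small later on (local refinement).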

3. Improved MBO Algorithm Based on OBL and RLP

3.1. Motive of Improving MBO Algorithm

In the MBO algorithm, the migration operator and the butterfly adjusting operator determine the search direction of the monarch butterflies, and the two operators can be executed simultaneously. The advantages of the MBO algorithm include its simplicity and easy implementation. However, two drawbacks of the MBO algorithm cause poor optimization efficiency on complex optimization problems. First, from (1), the monarch butterflies r1 and r2 are randomly selected from Subpopulation1 and Subpopulation2, respectively, so a worse monarch butterfly may be selected to share its features with a better one, leading to population degeneration. Second, in the main process of MBO, when the elitist strategy is adopted, the population must be sorted twice during each generation, causing high time complexity. Thus, in order to overcome the above drawbacks and improve the optimization efficiency of MBO, several improvements are developed in this paper.

3.2. Opposition-Based Learning Method

In the field of computational intelligence, OBL is usually used to improve the convergence rate of optimization algorithms. Its main idea is to consider the current population together with its opposite population at the same time and thereby obtain better candidate solutions [55]. In recent years, scholars have applied the OBL method in population-based optimization techniques to enhance the convergence rate. It has been shown that an opposite candidate solution has a better chance of being closer to the global optimum than a random candidate solution [51]. The opposite solution in OBL is the mirror point of the solution with respect to the center of the search space, and its formula can be mathematically expressed as

$$\bar{x}_d = a_d + b_d - x_d, \quad d = 1, 2, \ldots, D, \tag{3}$$

where $x = (x_1, x_2, \ldots, x_D)$ is a feasible solution in a D-dimensional search space with $x_d \in [a_d, b_d]$, and its opposition-based solution is $\bar{x} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_D)$.

Note that if the OBL approach is introduced into the initialization of the MBO algorithm, it produces an opposition-based population. The better individuals are then selected from the union of the original and opposition-based populations to participate in the evolution. Thus, this operation increases the population diversity and expands the exploration scope of MBO.
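The OBL-based initialization described above can be sketched as follows (minimization is assumed; names are illustrative):

```python
import numpy as np

def obl_initialization(n, dim, lower, upper, fitness, rng=None):
    """Sketch of OBL-based initialization: generate a random population, build
    its opposite with Eq. (3) (x_bar = a + b - x), and keep the n fittest
    individuals from the union of the two populations."""
    rng = np.random.default_rng() if rng is None else rng
    pop = lower + (upper - lower) * rng.random((n, dim))
    opposite = lower + upper - pop                     # Eq. (3), elementwise
    union = np.vstack([pop, opposite])
    scores = np.array([fitness(x) for x in union])
    return union[np.argsort(scores)[:n]]               # the n best survive
```

Because the opposite of a feasible point is also feasible, the selected population stays within the search bounds while covering both a point and its mirror image.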

3.3. Random Local Perturbation-Based Migration Operator

To overcome the premature convergence of MBO, a novel RLP is constructed and merged into the migration operator of the MBO. The RLP strategy is defined by equation (4), where $x_{best}^{t}$ is the optimal solution in generation t, $x_{sbest}^{t}$ is the suboptimal solution in generation t, $x_{i,k}^{t+1}$ is the kth element of the ith individual in generation t + 1, $x_{best,d}^{t}$ is the dth element of the optimal solution in generation t, $x_{sbest,d}^{t}$ is the dth element of the suboptimal solution in generation t, and d is the element index used in (4).

Equation (4) shares the information of the optimal and suboptimal solutions, which is conducive to guiding the current individual toward them; the convergence speed can thus be effectively accelerated. Meanwhile, to maintain the diversity of the MBO search, a control parameter R is set to R = 0.5, a value determined through extensive experiments, and a random number r in [0, 1] is generated. When r < R, the location update is performed according to (4); otherwise, it is performed according to (1).

For the basic MBO, the parameters of the elitist strategy need to be set, and the population is sorted twice in each generation, which incurs considerable computational cost. If the greedy selection method is adopted in MBO, the population is sorted only once per generation. Thus, the elitist strategy can be replaced by the greedy selection method in SIO algorithms [53]. Hence, in each generation, the newly generated monarch butterflies are compared with the corresponding old ones, and the better one is selected. This replacement eliminates the elitist parameters, removes one sorting operation, and further improves the operational efficiency. Accordingly, the greedy strategy is introduced into the improved migration operator with RLP, and the superior candidate solution is retained by the principle of survival of the fittest. Here, the greedy strategy can be expressed as

$$x_{i}^{t+1} = \begin{cases} x_{i,new}, & f(x_{i,new}) < f(x_{i}^{t}),\\ x_{i}^{t}, & \text{otherwise}, \end{cases} \tag{5}$$

where $x_{i}^{t+1}$ is the monarch butterfly individual kept for generation t + 1, and $f(x_{i,new})$ and $f(x_{i}^{t})$ represent the fitness values of the two monarch butterflies $x_{i,new}$ and $x_{i}^{t}$, respectively.
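The greedy replacement of equation (5) is a one-line rule; a sketch assuming minimization:

```python
def greedy_select(x_old, x_new, f):
    """Greedy replacement per Eq. (5): keep whichever of the old and the newly
    generated butterfly has the better (lower) fitness."""
    return x_new if f(x_new) < f(x_old) else x_old
```

For example, with `f = lambda x: x[0] ** 2`, `greedy_select([2.0], [1.0], f)` keeps the new individual, while `greedy_select([0.5], [3.0], f)` keeps the old one, so the per-individual fitness can never get worse.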

The specific steps of the improved migration operator with RLP are described in Algorithm 1.

for i = 1 to NP1 do
for k = 1 to D do
R = 0.5.
Set r as a random number in [0, 1].
if r < R then
Calculate x_{i,k}^{t+1} by Eq. (4).
else
Calculate x_{i,k}^{t+1} by Eq. (1).
end if
end for
Calculate x_i^{t+1} with the greedy strategy by Eq. (5).
end for
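Algorithm 1 can be sketched as follows. Since equation (4) is not reproduced in this text, the RLP branch below uses a hypothetical update that mixes the optimal and suboptimal solutions; it is consistent with the surrounding description but not guaranteed to match the paper's exact formula:

```python
import numpy as np

def improved_migration(sub1, sub2, x_best, x_sbest, fitness,
                       p=5/12, peri=1.2, R=0.5, rng=None):
    """Sketch of Algorithm 1: improved migration operator with RLP and
    greedy selection (minimization assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    np1, dim = sub1.shape
    new_sub1 = sub1.copy()
    for i in range(np1):
        cand = sub1[i].copy()
        for k in range(dim):
            if rng.random() < R:
                d = rng.integers(dim)                  # random element index d
                # hypothetical RLP update sharing best / second-best information
                cand[k] = x_best[d] + rng.random() * (x_best[d] - x_sbest[d])
            else:
                r = rng.random() * peri                # basic migration, Eq. (1)
                src = sub1 if r <= p else sub2
                cand[k] = src[rng.integers(src.shape[0]), k]
        if fitness(cand) < fitness(sub1[i]):           # greedy strategy, Eq. (5)
            new_sub1[i] = cand
    return new_sub1
```

Note how the greedy step guarantees that every individual of the returned subpopulation is at least as fit as before, which is the monotonicity property the text attributes to the improved operator.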

As Algorithm 1 shows, the improved migration operator with RLP makes full use of the information of the high-quality individuals in the current population through information sharing and improves the local search ability. Moreover, the greedy strategy retains only individuals with better fitness, which efficiently enhances the convergence rate.

3.4. Main Procedure of Improved MBO Algorithm

All of the above improvements can enhance the optimization performance of the MBO algorithm. The main process of the OBL and RLP-based improved MBO (OPMBO) algorithm is illustrated in Figure 1. The specific steps of the OPMBO algorithm are provided in Algorithm 2.

Step 1. Set the population size NP, the maximum generation MaxGen, the dimension D, the maximum walk step S_max,
the adjusting rate BAR, the migration period peri, and the migration rate p. Let the current generation counter t = 1.
//Initialization operation
Step 2. Generate the opposition-based population according to OBL. Select the individuals with better fitness to enter
the next generation from the original and opposition-based populations.
Step 3. Calculate the fitness value of each monarch butterfly according to its location. //Fitness evaluation
Step 4.  While t ≤ MaxGen do
Sort the population according to monarch butterfly fitness using the Quicksort algorithm in [31].
Divide the monarch butterfly population into two subpopulations, i.e., Subpopulation1 and Subpopulation2.
for i = 1 to NP1 do
Update Subpopulation1 using Algorithm 1.
end for
for j = 1 to NP2 do
Update Subpopulation2 by Eq. (2).
end for
Merge the two new subpopulations into a new population.
Recalculate the fitness value of each monarch butterfly according to its updated position.
Let t = t + 1.
Step 5. end while
Step 6. Output the optimal values.

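Putting the pieces together, a condensed, self-contained sketch of the OPMBO main loop on the sphere function might look as follows. Operator details are simplified for brevity: the Levy term of equation (2) is omitted, and the RLP branch again uses a hypothetical stand-in for equation (4), which is not reproduced in this text:

```python
import numpy as np

def opmbo_sphere(n=20, dim=10, max_gen=100, lower=-5.0, upper=5.0, seed=0):
    """Condensed sketch of the OPMBO main loop (Algorithm 2), minimizing the
    sphere function. Not the paper's exact implementation."""
    rng = np.random.default_rng(seed)
    f = lambda x: float(np.sum(x * x))
    p, peri, R = 5 / 12, 1.2, 0.5

    # Step 2: OBL initialization -- keep the n fittest of population + opposite.
    pop = lower + (upper - lower) * rng.random((2 * n, dim))
    pop[n:] = lower + upper - pop[:n]                  # Eq. (3)
    pop = pop[np.argsort([f(x) for x in pop])[:n]]

    n1 = int(np.ceil(p * n))                           # NP1 = ceil(p * NP)
    for t in range(1, max_gen + 1):
        pop = pop[np.argsort([f(x) for x in pop])]     # sort once per generation
        best, sbest = pop[0], pop[1]
        sub1, sub2 = pop[:n1], pop[n1:]
        # Improved migration operator with RLP and greedy selection (Algorithm 1).
        for i in range(n1):
            cand = sub1[i].copy()
            for k in range(dim):
                if rng.random() < R:                   # hypothetical RLP branch
                    d = rng.integers(dim)
                    cand[k] = best[d] + rng.random() * (best[d] - sbest[d])
                else:                                  # migration branch, Eq. (1)
                    src = sub1 if rng.random() * peri <= p else sub2
                    cand[k] = src[rng.integers(len(src)), k]
            if f(cand) < f(sub1[i]):                   # greedy strategy, Eq. (5)
                sub1[i] = cand
        # Butterfly adjusting operator (simplified: Levy step omitted).
        for j in range(len(sub2)):
            for k in range(dim):
                if rng.random() <= p:
                    sub2[j, k] = best[k]
                else:
                    sub2[j, k] = sub2[rng.integers(len(sub2)), k]
        pop = np.vstack([sub1, sub2])
    return min(f(x) for x in pop)
```

Because the best individual is only ever replaced through the greedy rule, the best fitness in this sketch never increases from one generation to the next.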
3.5. Complexity Analysis

Under the same software and hardware conditions, the computational complexity of an optimization algorithm is mainly composed of two parts: one is the complexity of the objective function, and the other is the complexity of the algorithm process. In the comparison experiments, six SIO algorithms, namely, the MBO algorithm [29], the GCMBO algorithm [36], the OPMBO algorithm, the FOA based on a hybrid location information exchange mechanism (HFOA) [60], the GWO algorithm [16], and the OGWO algorithm [54], have the same population size and maximum number of iterations, so their maximum numbers of function evaluations are equal. Thus, the complexity of the OPMBO algorithm mainly depends on its operation process. For the OPMBO algorithm, the time complexity is polynomial. Assume that the maximum number of iterations of OPMBO is MaxGen, the population size is NP, the size of Subpopulation1 is NP1, the size of Subpopulation2 is NP2 with NP2 = NP − NP1, and the dimension is D. For notational simplicity, MaxGen is written as T, NP as N, and NP1 as N1. According to Figure 1, the time complexity of the algorithm is mainly determined by each iteration cycle. The detailed analysis of the time complexity of OPMBO is as follows: the first step calculates the fitness values of the monarch butterflies, with time complexity O(N). The second step is sorting, and the time complexity of the Quicksort algorithm in [31] is O(N log N). The third step divides the population into two subpopulations, with time complexity O(N). The fourth step first runs the improved migration operator, which has two nested loops, so its time complexity is O(N1 × D); the butterfly adjusting operator then also runs two nested loops, with time complexity O((N − N1) × D). Therefore, the total time complexity of the OPMBO algorithm is O(T × (N + N log N + N + N1 × D + (N − N1) × D)) = O(T × (N log N + N × D)).
In the OPMBO algorithm, since the variable storage space is determined by the population size N and the variable dimension D, the space complexity is O(N × D).

4. Experimental Results and Analysis

4.1. Experiment Preparation

To verify the optimization performance of OPMBO, a series of experiments are performed on various benchmark functions. Testing on benchmark functions is a common and popular method to verify the performance of intelligent algorithms. For example, Wang et al. [29] introduced 38 benchmark functions to demonstrate the superior performance of the MBO algorithm, and the results clearly exhibit the capability of MBO to find improved function values on most of the benchmark problems. Wang et al. [36] employed 18 benchmark functions to test the GCMBO algorithm, and the results indicate that GCMBO significantly outperforms the basic MBO method on almost all test cases. Zhang et al. [53] adopted 21 benchmark functions to verify the efficiency of a biogeography-based optimization algorithm. Zhang et al. [54] used 30 benchmark functions to illustrate the performance of a hybrid algorithm based on biogeography-based optimization and GWO. In our experiments, 12 typical benchmark functions are selected from [29, 36, 53, 54] to test the performance of the OPMBO algorithm, which rigorously verifies the effectiveness of all compared algorithms. The information for the 12 benchmark functions is shown in Table 1, where Fmin is the minimum value (ideal optimal value) of each function. These benchmark functions can be classified into two types, unimodal functions f1-f7 and multimodal functions f8-f12.
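Table 1 itself is not reproduced here, but the two function classes can be illustrated by the following typical members (common examples of the unimodal and multimodal families such suites draw from, not necessarily the exact f1-f12 of the paper):

```python
import numpy as np

def sphere(x):
    """Typical unimodal benchmark; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * x))

def rastrigin(x):
    """Typical multimodal benchmark; global minimum 0 at the origin,
    surrounded by a regular grid of local minima."""
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x * x - 10.0 * np.cos(2.0 * np.pi * x)))
```

Unimodal functions such as the sphere mainly test convergence speed, while multimodal functions such as Rastrigin test the ability to escape local optima, which is why both classes appear in the suite.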

The experiments were performed on a personal computer running Windows 7 with an Intel(R) Core(TM) CPU operating at 3.10 GHz and 4 GB of memory. All simulation experiments were implemented in MATLAB R2014a.

4.2. Comparison of OPMBO with MBO and GCMBO on Different Dimensions

The objective of the following experiments is to show the comparison results on the 12 benchmark functions over different dimensions. Two state-of-the-art algorithms, the MBO [29] and the GCMBO [36], are selected as the comparison algorithms to evaluate the effectiveness of the OPMBO. Following the experimental techniques developed by Wang et al. [36] and Zhang et al. [53], three dimension settings are used: low-dimensional (20 dimensions), medium-dimensional (50 dimensions), and high-dimensional (100 dimensions). As the dimension increases, the difficulty of the problem increases, which verifies that OPMBO has the ability to handle complex optimization problems. The optimization experiments of the three algorithms (MBO, GCMBO, and OPMBO) are then performed on the three dimension settings for the 12 benchmark functions to verify the optimization performance of the OPMBO. The related parameter values for testing the three algorithms are shown in Table 2. Following the experimental techniques designed by Wang et al. [29], D denotes the dimension, MaxGen the maximum number of iterations, NP the size of the monarch butterfly population, and Num the number of independent runs for each optimization problem. As the dimension of the benchmark function increases, the maximum number of iterations MaxGen increases accordingly. The monarch butterfly population of the three algorithms is uniformly set to 50, and the other parameters are the same as in [29, 36]. In order to reduce random error, each method is run 30 times independently for each optimization problem, and the reported results are the averages over the 30 runs. The experimental results for the different dimensions (20, 50, and 100) on the 12 benchmark functions are listed in Tables 3-5, respectively, where the best values are in bold font.
Following the experimental techniques designed in [29, 36, 53, 54, 60, 61], the optimal value (Best), the worst value (Worst), the mean value (Mean), and the standard deviation (Std) of the fitness values over the 30 independent runs are employed to evaluate the MBO, GCMBO, and OPMBO algorithms. Here, Zhang et al. [53] and Wang et al. [36] stated in their experimental analyses that lower Mean and Std values indicate a better algorithm with respect to search ability and stability.

The first part of this experiment is conducted on 20 dimensions, and the results are illustrated in Table 3. It can be seen from Table 3 that on f1-f4, f8-f10, and f12, the OPMBO algorithm achieves the best results in terms of the Best, Worst, Mean, and Std values. In particular, OPMBO obtains the theoretical optimal value on f12. On f5 and f7, although the Best value of OPMBO is not the best, OPMBO achieves the best results in terms of the Worst, Mean, and Std values. On f11, the Best value of GCMBO equals that of OPMBO, but OPMBO achieves the best results for the other measures. On f6, all three algorithms achieve the best value, but the Mean and Std values of OPMBO are better. Therefore, the experimental results on the 20-dimensional benchmark functions show that OPMBO performs excellently on low-dimensional functions.

The second part of this experiment is performed on 50 dimensions, and the results are shown in Table 4. On f1, the values of OPMBO are not as good as those of GCMBO. However, the OPMBO algorithm achieves the best results on f2-f4, f8, and f9, and both the Mean and Std values of OPMBO are optimal on f5 and f10-f12. As the dimension increases, the optimization performance of MBO and GCMBO on f6 deteriorates severely, whereas that of OPMBO does not. To demonstrate the convergence speed and the local and global search abilities of the three algorithms more intuitively, the convergence curves corresponding to the 50-dimensional results in Table 4 are shown in Figure 2.

Figure 2 shows that on f1, the convergence rate of OPMBO is better than those of MBO and GCMBO in the initial iterations, whereas OPMBO converges more slowly than GCMBO in later iterations; overall, both GCMBO and OPMBO perform much better than MBO on f1. The convergence speed of OPMBO is clearly faster than those of MBO and GCMBO on f2-f11. On f2, by iteration 200 the MBO and GCMBO algorithms have stagnated, while OPMBO keeps updating its search, so OPMBO converges faster. On f3, although GCMBO converges fastest at the beginning, OPMBO gradually shows the best trend as the iterations progress. On f4, the convergence curves of MBO and GCMBO are essentially the same, but the curve of OPMBO shows a clearly faster convergence speed. On f5, from about iteration 100, both MBO and GCMBO are caught in search stagnation, whereas OPMBO continues to converge rapidly. On f6, combining the convergence graph with Table 4, OPMBO has already obtained the ideal optimal value by iteration 100. On f8, although OPMBO appears to stagnate at iteration 100, it escapes the local optimum by iteration 200; the reason is that the migration operator with RLP helps the algorithm jump out of local optima to some extent. On f9 and f10, the convergence of OPMBO is much better than those of MBO and GCMBO. Although the convergence of OPMBO is better than those of MBO and GCMBO on f7 and f11, it is not as good as on f10. Thus, the convergence speed of OPMBO is significantly faster than those of the other algorithms, especially on f4-f6 and f9. On f12, the convergence of GCMBO is better than those of MBO and OPMBO in the early iterations, whereas OPMBO converges faster than the other algorithms in the later iterations. From the convergence curves, the OPMBO algorithm therefore outperforms the MBO and GCMBO algorithms.
In conclusion, the results in Table 4 show that OPMBO also performs excellently on the medium-dimensional benchmark functions.

The third part of this experiment is carried out on 100 dimensions, and the results are presented in Table 5. As Table 5 shows, the OPMBO algorithm achieves excellent performance. Although OPMBO is not as good as GCMBO in terms of the Std value on f11, it is the best in terms of both the Mean and Std values on the other functions. Most importantly, the performance of OPMBO on f6 does not decrease as the dimension increases. Therefore, OPMBO obtains the best optimization performance on the high-dimensional functions.

From Tables 3-5, the GCMBO and OPMBO algorithms obtain similar results on f1 in the three dimensions, and both are better than MBO. On f2, the results of OPMBO are the best in all three dimensions; in particular, the optimization performance does not decline as the dimension rises from 20 to 50. On f3 and f4, although OPMBO achieves the best results, the gap among the three algorithms is not obvious. On f5, the results of OPMBO are superior to those of the other two algorithms, especially on the 20-dimensional and 50-dimensional functions. It is worth mentioning that OPMBO achieves the ideal optimal value of f6 in all three dimensions, which demonstrates its excellent performance. On f8, the Mean of OPMBO is 7.8430e-14 in 20 dimensions, 1.1863e-13 in 50 dimensions, and 2.9010e-12 in 100 dimensions; hence the optimization performance of OPMBO hardly degrades as the dimension increases. On f9 and f10, the results of OPMBO are much better than those of the other two algorithms in all three dimensions. On f11, OPMBO is better than the other two algorithms, except in 100 dimensions. Furthermore, the Mean and Std values of OPMBO are the best in all three dimensions on f7 and f12. Therefore, the results show that OPMBO outperforms the other two algorithms on most of the 12 benchmark functions in the three dimensions.

The above experimental results and analysis show that OPMBO not only has great convergence accuracy but also exhibits better global search capability on multimodal functions of the three dimensions, and it is superior to the compared algorithms with a relatively clear advantage. Since OPMBO embeds OBL, the diversity of the population is enhanced and the global exploration ability is improved. Moreover, RLP is added to the migration operator so that the information of the better individuals in the population is used effectively, which greatly improves the convergence rate of the OPMBO algorithm. In summary, OPMBO achieves better optimization performance than the other two algorithms on both unimodal and multimodal functions. Therefore, all the experimental results show that the OPMBO algorithm is effective and feasible.

4.3. Two-Tailed t-Test of the Experimental Results on 100-Dimensional Benchmark Functions

For the experimental results of the 100 dimensions on the 12 benchmark functions in Table 5, following the experimental techniques designed by Zhang et al. [61], Table 6 shows the values of t and p for a two-tailed t-test with a 5% level of significance between the OPMBO algorithm and the other optimization algorithms (MBO and GCMBO), in which ‘‘Better’’, ‘‘Equal’’, and ‘‘Worse’’ indicate that OPMBO is better than, equal to, or worse than the compared methods in this case, respectively.

Table 6 shows that OPMBO is significantly better than MBO and GCMBO at a 95% confidence level, which demonstrates the excellent performance of the OPMBO algorithm.
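As a sketch of the computation behind such a test (the paper does not state here whether a pooled-variance or unequal-variance test was used; this example uses Welch's unequal-variance form, and the function name is illustrative):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom,
    computed from the final fitness values of two algorithms' runs."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

For two samples of 30 runs each, a |t| value above roughly 2.0 corresponds to significance at the two-tailed 5% level.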

4.4. Comparison of OPMBO with HFOA on 30 Dimensions and Population Size of 50

This portion of our experiments further examines the optimization performance of OPMBO with 30 dimensions and a population size of 50. The OPMBO algorithm is compared with the HFOA algorithm in [60]. Both algorithms are tested on 5 representative benchmark functions selected from Table 1 with 30 dimensions, and the population size is set to 50. Following the experimental techniques in [60], the results are shown in Table 7, where the bold font indicates the best.

Both HFOA and OPMBO obtain the theoretical optimal value on f6. However, on the other 4 benchmark functions, OPMBO clearly outperforms HFOA in terms of the Mean and Std values. Hence, the results indicate that the optimization ability of OPMBO is strong. In summary, the OPMBO algorithm outperforms the HFOA algorithm with 30 dimensions and a population size of 50.

4.5. Comparison of OPMBO with GWO and OGWO on 30 Dimensions and Population Size of 20

To further illustrate the optimization performance of the OPMBO algorithm, similar to Section 4.4, OPMBO is compared with the GWO algorithm [16] and the OGWO algorithm [54]. Since both OPMBO and OGWO use the OBL method, they are highly comparable. The three algorithms are tested on 5 representative benchmark functions selected from Table 1 with 30 dimensions, and the population size is 20. Following the experimental techniques designed by Mirjalili et al. [16] and Zhang et al. [54], the experimental results are presented in Table 8, where the best values are in bold font.

Table 8 shows that although the results of OPMBO are not as good as those of GWO and OGWO on f1, f4, and f7, OPMBO is better than GWO and OGWO on f2 and f8. In conclusion, the optimization performance of OPMBO is competitive with those of GWO and OGWO on the benchmark functions.

4.6. Comparison of Clustering Optimization with Six SIO Algorithms

Clustering groups objects or data items according to similarity criteria, collecting items that are close to each other under several criteria into the same group [62]. Data clustering aims to maximize the similarity of the members within a group (or cluster) as much as possible and to minimize the similarity of members across two different groups [53]. One well-known way to handle the clustering problem is to convert it into an optimization problem, which can then be solved by any optimization algorithm [63]. That is, clustering itself can be stated as an optimization problem, so it can be solved by SIO algorithms [54]. Many scholars have devoted themselves to solving clustering problems using SIO techniques [62-70].

For a data space, let X = {X1, X2, ..., Xn} be a data set with n objects or patterns, where each object is of dimension m, i.e., Xi ∈ R^m for i = 1, 2, ..., n. Then, a clustering C can be represented as K clusters C = {G1, G2, ..., GK}, such that the following conditions are satisfied:

(1) Gi ≠ ∅ for i = 1, 2, ..., K, where Gi refers to the ith cluster.

(2) Gi ∩ Gj = ∅, where i ≠ j.

(3) G1 ∪ G2 ∪ ... ∪ GK = X.

(4) sim(X1, X2) > sim(X1, Y1), where X1, X2 ∈ Gi and Y1 ∈ Gj, i ≠ j.
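Conditions (1)-(3) simply require the clusters to form a partition of X. As an illustration, a small check of these three conditions (the function name and the index-set representation of clusters are our own) could look like:

```python
def is_valid_partition(clusters, data):
    """Check conditions (1)-(3) of a clustering: every cluster is non-empty,
    clusters are pairwise disjoint, and their union covers the whole data set.
    Objects are referenced by position; `clusters` is a list of index sets."""
    if any(len(g) == 0 for g in clusters):   # (1) every cluster non-empty
        return False
    covered = set()
    for g in clusters:
        if covered & g:                      # (2) pairwise disjoint
            return False
        covered |= g
    return covered == set(range(len(data)))  # (3) union equals X
```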

It is known that clustering can be achieved with various well-known similarity measures. In this experiment, since numeric data are analyzed, the Euclidean distance is introduced to measure the similarity degree between two objects. Since each individual of OPMBO is viewed as a candidate solution for the clustering optimization problem, the fitness value of the individual is used as the objective function value of the corresponding candidate solution, and the best solution is the one that minimizes the objective function value. Then, the objective function [53] is described as

f(X, Z) = Σ_{k=1}^{K} Σ_{Xi ∈ Gk} ||Xi − Zk||,

where Zk represents the kth clustering center, and ||Xi − Zk|| indicates the Euclidean distance between the two m-dimensional objects Xi and Zk.
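Assuming the objective is the sum of Euclidean distances from each object to the center of its assigned cluster, a minimal sketch of the fitness evaluation (the function name and the assignment representation are illustrative assumptions) is:

```python
import math

def clustering_objective(data, centers, assignment):
    """Sum of Euclidean distances between each object and its cluster center,
    i.e. the fitness value minimized by each candidate solution.
    `assignment[i]` gives the index of the cluster that object i belongs to."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(dist(data[i], centers[k]) for i, k in enumerate(assignment))
```

Each OPMBO individual encodes a set of K candidate centers; evaluating it amounts to assigning every object to its nearest center and computing this sum.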

In Table 9, 13 data sets are adopted to test the clustering optimization performance of our proposed algorithm. The specifications of the 13 data sets are shown in Table 9, in which Mydata can be downloaded at http://yarpiz.com/64/ypml101-evolutionary-clustering, and the other 12 data sets are taken from UCI Machine Learning Repository, which can be downloaded at http://archive.ics.uci.edu/ml/datasets.htm.

The first section of this experiment is to illustrate the efficiency of clustering optimization on the selected 10 data sets from Table 9 with three MBO algorithms, which include the MBO, GCMBO, and OPMBO algorithms. The common parameter values of all three algorithms are set as follows: the population size N is 50, the maximum number of iterations MaxGen is 200, and the independent run number on each data set is 30. Following the experimental techniques designed in [29, 36], the Mean indicates the average minimum distance between each spatial point and the center of each cluster, the Std describes the standard deviation value, and then the comparison results are shown in Table 10, where the bold font indicates the best.

From Table 10, OPMBO obtains the best Mean and Std values on all 10 data sets. On the Heart, Liver Disorders, and Statlog data sets, the Mean values of OPMBO are lower than those of GCMBO and MBO by nearly 5-6% and nearly 4-7%, respectively, while the differences in the Std values among OPMBO, MBO, and GCMBO on these three data sets are approximately 1%. Although OPMBO also achieves the best Mean and Std values on the other seven data sets, its values are clearly very close to those of MBO and GCMBO.

The following part of this experiment further concerns the clustering optimization performance on the selected three data sets from Table 9 with the four kinds of different optimization algorithms, which include the OPMBO, the PSO algorithm [5], the FA algorithm [42, 69], and the cuckoo search (CS) algorithm [70]. The related parameter values of OPMBO are the same as [5, 69, 70], where the population size N is 40, the maximum number of iterations MaxGen is 200, and the independent run number on each data set is 20. Following the experimental techniques designed in [5, 42, 69, 70], similar to Section 4.2, the Best, Worst, Mean, and Std of the fitness values can be obtained. The comparison results are illustrated in Table 11, where the bold font indicates the best.

It can be observed from Table 11 that OPMBO achieves the best values of Best, Worst, Mean, and Std on the Wine and Glass data sets, but not on Iris. For Wine, the values of Best, Worst, Mean, and Std achieved by the PSO, FA, and CS algorithms are about 1000 times larger than those of the OPMBO algorithm. For Glass, the values of Best, Worst, and Mean achieved by FA are larger than those of OPMBO by about 400-450; the Std value of OPMBO is close to that of FA, while the Std value of PSO is larger than that of OPMBO by nearly 100. For Iris, the Std value of OPMBO is smaller than those of PSO, FA, and CS by 3-8, but the Best, Worst, and Mean values obtained by OPMBO are larger than those of PSO, FA, and CS by 40-80. The reason may be that the Iris data set has too few attributes, so the OPMBO algorithm performs poorly in terms of Mean. In general, the OPMBO algorithm obtains satisfactory results in solving clustering optimization problems.

5. Conclusion and Future Work

Recently, SIO algorithms have been widely employed to solve complex optimization problems. MBO, as one of the most promising SIO methods, was proposed to tackle global optimization problems. To improve the optimization efficiency of the MBO algorithm and obtain an algorithm with strong general applicability, this paper introduces OBL into MBO and proposes RLP to improve the basic MBO algorithm. The improvements target two aspects: enhancing the optimization performance and reducing the computational complexity. First, OBL prevents the algorithm from falling into local optima to some degree. Second, a new RLP is merged into the migration operator, which enhances the local search ability and accelerates the convergence speed. Third, the greedy strategy is used instead of the elitist strategy, which eliminates the elitist parameter setting and removes a sorting operation. Finally, the OPMBO algorithm is proposed, and to verify its optimization performance, a series of experiments are performed on 12 benchmark functions and 13 public data sets. The results verify that our OPMBO algorithm typically outperforms the other algorithms considered. Based on the comparison and analysis of our scheme with other schemes, the contributions of our proposed method can be summarized as follows:

OBL, a widely used technique in optimization algorithms, is introduced into the MBO algorithm. An opposite candidate solution generated by OBL has a better chance of being closer to the global optimum than a random candidate solution, so this process can efficiently prevent MBO from falling into a local optimum.
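The standard OBL construction computes, for a solution x in the box [lower, upper], the opposite point lower + upper − x in every dimension; a minimal sketch (the function name is illustrative):

```python
def opposite_solution(x, lower, upper):
    """Opposition-based learning: the opposite of x[j] within
    [lower[j], upper[j]] is lower[j] + upper[j] - x[j]. The better of x and
    its opposite (by fitness) is then passed to the next generation."""
    return [lo + hi - xj for xj, lo, hi in zip(x, lower, upper)]
```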

RLP is proposed and merged into the migration operator. It shares the information of the optimal and suboptimal solutions and helps guide the current individual toward them. In this way, the premature convergence of MBO can be mitigated effectively.
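As a purely illustrative sketch of this idea (the exact update rule of the improved migration operator is given by the paper's equations, which are not reproduced here; the function name, probability, and coefficients below are our assumptions), a perturbation pulling the current individual toward the best or second-best individual could be written as:

```python
import random

def rlp_migration_step(x, best, second_best, rate=0.5):
    """Hypothetical random local perturbation: each dimension is pulled a
    random fraction of the way toward either the best or the second-best
    individual, sharing their information with the current individual."""
    new_x = list(x)
    for j in range(len(x)):
        if random.random() < rate:
            new_x[j] += random.random() * (best[j] - x[j])         # toward best
        else:
            new_x[j] += random.random() * (second_best[j] - x[j])  # toward second-best
    return new_x
```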

A greedy strategy is introduced into the improved migration operator with RLP, and the superior candidate solution is retained following the principle of survival of the fittest. This operation eliminates the elitist parameters, removes a sorting operation, and further improves the computational efficiency.
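The greedy replacement itself is a one-line comparison, shown here for a minimization problem (a sketch; the names are illustrative):

```python
def greedy_replace(parent, child, fitness):
    """Greedy strategy: keep whichever of the parent and the newly generated
    child has the better (lower) fitness -- no elitist parameter to tune and
    no per-generation population sorting required."""
    return child if fitness(child) < fitness(parent) else parent
```

Applied individual by individual after the migration operator, this replaces the elitist strategy's sort-and-preserve step.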

Since each individual of OPMBO is viewed as a candidate solution for the clustering optimization problem, the fitness value of the individual is used as the objective function value of the corresponding candidate solution, and the best solution is the one that minimizes the objective function value. The OPMBO algorithm can therefore be applied to clustering optimization on public data sets.

It is well known that no SIO algorithm can effectively solve all optimization problems, and indeed, in our experiments the OPMBO algorithm cannot obtain satisfactory results on some benchmark functions. In our future work, several interesting problems can be studied further. We intend to hybridize MBO with the latest SIO methods, further improve the OPMBO algorithm, and use it to solve multiobjective optimization problems. Furthermore, OPMBO will be applied to other engineering fields.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grants 61772176, 61402153, 61472042, and 11601130), the China Postdoctoral Science Foundation (Grant 2016M602247), the Plan for Scientific Innovation Talent of Henan Province (Grant 184100510003), the Key Project of Science and Technology Department of Henan Province (Grants 182102210362, 182102210078), the Young Scholar Program of Henan Province (Grant 2017GGJS041), the Key Scientific and Technological Project of Xinxiang City (Grant CXGG17002), the Ph.D. Research Foundation of Henan Normal University (Grants qd15132, qd15129), the Natural Science Foundation of Henan Province (Grants 182300410130, 182300410368), and the Natural Science Project of Henan Province Education Department (Grant 17A520039).