Abstract

The backtracking search optimization algorithm (BSA) is a recently proposed evolutionary algorithm with a simple structure and good global exploration capability, and it has been widely used to solve optimization problems. However, the exploitation capability of the BSA is poor. This paper proposes a deep-mining backtracking search optimization algorithm guided by collective wisdom (MBSAgC) to improve its performance. The proposed algorithm develops two learning mechanisms, i.e., a novel topological opposition-based learning operator and a linear combination strategy, by deeply mining the winner-tendency of collective wisdom. The topological opposition-based learning operator guides MBSAgC to search the vertices of a hypercube around the best individual. The linear combination strategy contains a difference vector that guides individuals to learn from the best individual. In addition, to balance the overall performance, MBSAgC simulates the clusterity-tendency strategy of collective wisdom to develop another difference vector in the above linear combination strategy; this vector guides individuals to learn from the mean value of the current generation. The performance of MBSAgC is tested on the CEC2005 benchmark functions (in 10 and 30 dimensions), the CEC2014 benchmark functions, and a test suite composed of five engineering design problems. The experimental results of MBSAgC are very competitive compared with those of the original BSA and state-of-the-art algorithms.

1. Introduction

In the real world, there are many complex problems that cannot be solved by traditional optimization algorithms. The emergence of global optimization algorithms overcomes the shortcomings of traditional optimization techniques to some extent. Global optimization algorithms fall into two main categories: swarm intelligence algorithms (SIs) and evolutionary algorithms (EAs). SIs generally simulate the intelligent behavior of groups of creatures, such as particle swarm optimization (PSO) [1], the artificial bee colony (ABC) algorithm [2], the brain storm optimization (BSO) algorithm [3–7], and ant colony optimization (ACO) [8]. EAs are inspired by the concepts of evolution in nature, like the genetic algorithm (GA) [9], differential evolution (DE) [10–12], the covariance matrix adaptation evolution strategy (CMA-ES) [13], and the backtracking search optimization algorithm (BSA) [14].

The BSA, first introduced by Civicioglu in 2013, is a population-based evolutionary algorithm for solving real-valued optimization problems. Unlike other evolutionary algorithms, the BSA has a memory for storing a population from a randomly chosen previous generation, which allows it to take advantage of experience gained during the previous evolution process. Meanwhile, the BSA possesses two further advantages: a simple structure and only one control parameter (mix-rate). These advantages make it a promising newcomer among evolutionary algorithms. So far, it has been successfully applied to digital image processing [15, 16], power systems [17–19], energy and environmental management [20–22], artificial neural networks [23–26], induction motors [27, 28], antenna arrays [29, 30], and so on.

However, the BSA has a major shortcoming: its local exploitation capability is very poor. As a result, many improved variants have been developed to overcome this drawback. The major modifications fall into the following five categories: (1) BSA variants with hybridization mechanisms [31–39], (2) BSA variants with novel initialization mechanisms [40–46], (3) BSA variants with novel reproduction (mutation or crossover) mechanisms [47–55], (4) BSA variants with novel selection mechanisms [56–60], and (5) BSA variants with novel control mechanisms for parameters (including the control factor F, which is randomly generated in the original BSA) [61–69].

The reproduction mechanism is the core of the BSA. The mutation operator of the original BSA is determined by a Gaussian variation coefficient and the historical population, so it has good global exploration capability in the early stage. However, owing to the lack of guidance from favorable factors, the convergence speed of the algorithm is relatively slow. Many researchers have therefore modified the reproduction mechanisms of the BSA to improve its performance. Vitayasak et al. [48] modified the BSA in three ways: (i) applying a control mechanism after the crossover process, (ii) changing the process of generating the mapped value, and (iii) combining (i) and (ii). Li et al. [49] modified the mutation process of the BSA by introducing an optimal learning evolutionary process. Zhao et al. [50] proposed an improved BSA combined with the differential evolution algorithm and the breeder genetic algorithm mutation operator for constrained optimization problems. Das et al. [51] combined the BSA mutation operator with a DE mutation strategy for interference suppression of linear antenna arrays. Chen et al. [52] developed a learning guidance strategy to modify the BSA mutation strategy, and two years later, Chen et al. [53] proposed a new mutation operator with knowledge learning and an adaptive control parameter. Zhao et al. [54] introduced a population control factor and an optimal learning strategy into the mutation process of the BSA. Yu et al. [55] proposed a multiple learning strategy to replace the reproduction mechanism of the original BSA. These modified reproduction mechanisms all achieved satisfactory results.

Inspired by the winner-tendency and clusterity-tendency behaviors of collective wisdom, this paper deeply mines the guiding information of the best individual (as a winner) to improve the exploitation capability of the BSA and uses the mean value of individuals (as a clustering center) to balance the overall performance of the algorithm. Along this line, this paper proposes a modified BSA, the deep-mining backtracking search optimization algorithm guided by collective wisdom (MBSAgC). The main contributions of the proposed algorithm are as follows.

(i) Deeply Mine the Winner-Tendency of Collective Wisdom. MBSAgC applies the winner-tendency behavior to design two strategies, i.e., a novel topological opposition-based learning operator and a linear combination strategy, which generate offspring around the best individual of the current generation to enhance the exploitation capability. In particular, the topological opposition-based learning operator, which is the core innovation of the proposed algorithm, guides the algorithm to search the vertices of a hypercube around the best individual. Compared with the original opposition-based learning strategy, the new operator broadens the search region around the best individual.

(ii) Appropriately Apply the Clusterity-Tendency of Collective Wisdom. To balance the overall capability, MBSAgC also adds a learning strategy to the above linear combination. This strategy guides individuals to learn from the mean value of the current generation.

(iii) Keep the Simple Structure of the Original BSA. MBSAgC calculates only one function evaluation (FE) per individual per generation and does not add any parameter that needs to be set by users. It thus keeps the simple structure of the original BSA.

The remainder of this paper is organized as follows. The basic BSA is introduced in Section 2. Section 3 describes in detail the proposed MBSAgC. Section 4 analyzes and discusses the novel topological opposition-based learning operator and the searching mechanism of MBSAgC. Simulation experiments are then performed in Section 5. Finally, conclusions are given in Section 6.

2. Basic BSA

BSA is a population-based evolutionary algorithm. It consists of the following five basic operators: initialization, selection-I, mutation, crossover, and selection-II.

2.1. Initialization

BSA generates an initial population P and an old population oldP by the following formula:

P(i, j) ~ U(low(j), up(j)), oldP(i, j) ~ U(low(j), up(j)),  (1)

where i = 1, 2, …, N, N is the population size, and j = 1, 2, …, D, D is the problem dimension. U represents the uniform distribution, and low(j) and up(j) are the lower boundary and upper boundary of the jth dimension, respectively.
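As a concrete illustration, the initialization in Equation (1) can be sketched in Python with NumPy; the function name initialize and the use of a seeded random generator are our own choices, not from the paper:

```python
import numpy as np

def initialize(N, D, low, up, rng):
    """Sample P and oldP uniformly within the per-dimension bounds [low_j, up_j]."""
    low = np.asarray(low, dtype=float)
    up = np.asarray(up, dtype=float)
    P = low + (up - low) * rng.random((N, D))     # current population
    oldP = low + (up - low) * rng.random((N, D))  # historical (memory) population
    return P, oldP

rng = np.random.default_rng(0)
P, oldP = initialize(N=5, D=3, low=[-5.0] * 3, up=[5.0] * 3, rng=rng)
```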

2.2. Selection-I

The BSA always implements selection-I at the beginning of each iteration. This process selects a new oldP, which is used for calculating the search direction. The oldP is reselected through the following "if-then" rule:

if a < b, then oldP := P,  (2)

where both a and b are random numbers uniformly distributed between 0 and 1. To ensure that the BSA has a memory, this operation reselects a population to be the new oldP. After this, the order of individuals in oldP is randomly permuted by the following formula:

oldP := permuting(oldP).  (3)
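The selection-I step above can be sketched as follows (a minimal NumPy version; the function name selection_one is ours, and the row permutation stands in for permuting(oldP)):

```python
import numpy as np

def selection_one(P, oldP, rng):
    """Selection-I: occasionally refresh the memory, then permute its rows."""
    a, b = rng.random(2)
    if a < b:                  # "if-then" rule: adopt the current population as memory
        oldP = P.copy()
    return oldP[rng.permutation(len(oldP))]   # permuting(oldP): shuffle individuals

rng = np.random.default_rng(1)
P = rng.random((4, 2))
oldP = rng.random((4, 2))
newOldP = selection_one(P, oldP, rng)
```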

2.3. Mutation

The mutation operation generates the initial form of the trial population T:

M = P + F · (oldP − P),  (4)

where F is the control factor which controls the amplitude of the search direction; its value is F = 3 · randn, where randn is a random number drawn from the standard normal distribution.
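A minimal sketch of this mutation step (the function name mutate is ours):

```python
import numpy as np

def mutate(P, oldP, rng):
    """BSA mutation: M = P + F * (oldP - P), with F = 3 * randn."""
    F = 3 * rng.standard_normal()   # amplitude of the search direction, N(0, 9)
    return P + F * (oldP - P)

rng = np.random.default_rng(2)
P = rng.random((4, 2))
oldP = rng.random((4, 2))
M = mutate(P, oldP, rng)
```

Note that a single F is drawn per call, so every individual moves along its own (oldP − P) direction with the same amplitude.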

2.4. Crossover

The final form of the trial population T is constructed by the crossover process of the BSA. This operation uses two strategies, each of which generates a binary integer-valued matrix (map) that selects the elements of individuals to be manipulated in each iteration.

One strategy uses the mix-rate parameter to control the number of elements of individuals that will be handled, ⌈mix-rate · rand · D⌉, where mix-rate = 1 and rand is a random number uniformly distributed on (0, 1). The other strategy operates on only one element per individual, randomly selected by randi(D), where randi(D) returns a random integer between 1 and D.

Both strategies are used equiprobably to handle the elements of individuals through the "if-then" rule: if map(i, j) = 1, then T(i, j) is updated with P(i, j); otherwise, T(i, j) keeps the mutant value M(i, j), where i ∈ {1, 2, …, N} and j ∈ {1, 2, …, D}.

At the end of the crossover process, if some individuals in T have overflowed the lower or upper boundary, they are regenerated by using Equation (1).
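The two crossover strategies can be sketched as below. This follows our reading of the description above: map is initialized to 1, the elements selected for mutation are set to 0, and positions with map = 1 are copied back from P; the function name crossover is ours:

```python
import numpy as np

def crossover(M, P, rng, mix_rate=1.0):
    """BSA crossover: build a binary map and mix mutant M with parent P."""
    N, D = P.shape
    map_ = np.ones((N, D), dtype=int)              # 1 -> element taken from P
    if rng.random() < rng.random():
        # strategy 1: mutate up to ceil(mix_rate * rand * D) elements per row
        for i in range(N):
            u = rng.permutation(D)
            map_[i, u[: int(np.ceil(mix_rate * rng.random() * D))]] = 0
    else:
        # strategy 2: mutate exactly one randomly chosen element per row
        for i in range(N):
            map_[i, rng.integers(D)] = 0
    return np.where(map_ == 1, P, M)               # 0 -> element kept from the mutant

rng = np.random.default_rng(3)
P = np.zeros((4, 3))   # parent elements are 0
M = np.ones((4, 3))    # mutant elements are 1, so T shows which elements mutated
T = crossover(M, P, rng)
```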

2.5. Selection-II

In this stage, individuals with better fitness values are selected from populations T and P according to a greedy selection mechanism, which updates the population P. If the fitness value of T(i) is better than that of P(i), then T(i) is retained and P(i) is replaced by T(i). This process is described as follows:

if f(T(i)) < f(P(i)), then P(i) := T(i).  (5)
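The greedy selection-II step can be sketched as follows (assuming minimization, as in the benchmarks used later; the function name selection_two is ours):

```python
import numpy as np

def selection_two(P, T, f):
    """Greedy selection: keep the trial vector wherever it improves fitness."""
    fP = np.array([f(x) for x in P])
    fT = np.array([f(x) for x in T])
    better = fT < fP                 # minimization: lower fitness is better
    P = P.copy()
    P[better] = T[better]
    return P

sphere = lambda x: float((x ** 2).sum())
P = np.array([[3.0, 3.0], [0.0, 0.0]])
T = np.array([[1.0, 1.0], [2.0, 2.0]])
newP = selection_two(P, T, sphere)   # first row improves, second does not
```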

3. Deep-Mining BSA Guided by Collective Wisdom (MBSAgC)

The collective wisdom here refers to the behaviors of a group of people learning from each other. Generally speaking, people tend to learn from the best individual; this behavior is called a winner-tendency in this paper. Meanwhile, people make many compromises when they make serious decisions; these compromises are called a clusterity-tendency behavior.

Inspired by the above two behaviors of collective wisdom, we design three strategies to improve the overall performance of the BSA. The winner-tendency behavior is simulated to design two strategies, a novel topological opposition-based learning operator and a linear combination strategy, which guide the population to learn from the best individual of the current generation; these two strategies tend to improve the exploitation capability of the BSA. The third strategy, which simulates the clusterity-tendency behavior to balance the algorithmic search, is designed to learn from the mean value of the current generation. The three strategies are then used to design an improved BSA, the deep-mining backtracking search optimization algorithm guided by collective wisdom, abbreviated as MBSAgC.

Unlike the basic BSA, the proposed MBSAgC executes a function optimization task by looping the topological opposition-based learning operator and an improved mutation operator to update the population. The improved mutation operator is designed by integrating the last two strategies into the original mutation of the BSA.

3.1. Topological Opposition-Based Learning Operator (TOBL)

The topological opposition-based learning operator (TOBL), derived by deeply mining the winner-tendency behavior, is characterized by learning from the location of the best individual of the current population. It does not increase the computational overhead of function evaluations, and it expands the search space of opposite points. The proposed TOBL is defined as follows (a detailed analysis of TOBL is given in the next section).

Definition 1 (topological opposition-based learning operator). Let x = (x(1), …, x(D)) be a point (vector) in a D-dimensional search space, where x(j) ∈ [low(j), up(j)]; its opposite point x̄ is

x̄(j) = low(j) + up(j) − x(j).  (6)

Then the topological opposite point x̆ of x can be defined as follows:

x̆(j) = x̄(j) if |x̄(j) − best(j)| < |x(j) − best(j)|, and x̆(j) = x(j) otherwise,  (7)

where best(j) is the jth dimension of the best individual of a population P.
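Under our reading of Definition 1, TOBL keeps, per dimension, whichever of x(j) and its boundary opposite lies closer to the best individual; a minimal sketch (the function name tobl is ours):

```python
import numpy as np

def tobl(x, best, low, up):
    """Topological opposition-based learning: per dimension, choose the
    closer of x_j and its opposite low_j + up_j - x_j to the best individual."""
    x, best = np.asarray(x, float), np.asarray(best, float)
    opposite = low + up - x                              # classic OBL opposite point
    closer = np.abs(opposite - best) < np.abs(x - best)  # dimension-wise comparison
    return np.where(closer, opposite, x)

x = np.array([1.0, 2.0])
best = np.array([4.0, 0.0])
x_topo = tobl(x, best, low=0.0, up=5.0)   # opposite is [4.0, 3.0]
```

Because each dimension independently keeps either x(j) or x̄(j), the candidate set is exactly the 2^D vertices of the axis-aligned hypercube spanned by x and x̄, and the operator picks the vertex nearest to the best individual without any extra function evaluation.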

3.2. Improved Mutation Operator

In the mutation of the basic BSA, the position of an individual is updated using only historical and current information; the offspring gain no knowledge from the best individual or the mean position of the population. MBSAgC therefore proposes an improved mutation operator which guides individuals to learn from the best individual and the mean individual of the population:

M(i) = P(i) + F · (oldP(i) − P(i)) + k1 · (best − P(i)) + k2 · (mean(P) − P(i)),  (8)

where mean(P) is the mean individual of the population, best is the best individual of the current generation, and k1 and k2 denote two different random numbers uniformly distributed on (0, 1).
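The exact form of the improved operator is partly elided in our copy; the sketch below follows the verbal description (the original BSA step plus a best-directed and a mean-directed difference vector with random weights k1, k2), so it is one plausible reading rather than the definitive implementation:

```python
import numpy as np

def improved_mutation(P, oldP, f, rng):
    """Improved mutation (sketch): BSA step plus difference vectors toward the
    best individual (winner-tendency) and the population mean (clusterity)."""
    fit = np.array([f(x) for x in P])
    best = P[fit.argmin()]     # best individual of the current generation
    mean = P.mean(axis=0)      # mean individual of the current generation
    F = 3 * rng.standard_normal()
    k1, k2 = rng.random(2)     # two different random weights on (0, 1)
    return P + F * (oldP - P) + k1 * (best - P) + k2 * (mean - P)

sphere = lambda x: float((x ** 2).sum())
rng = np.random.default_rng(4)
P = rng.random((5, 3))
oldP = rng.random((5, 3))
M = improved_mutation(P, oldP, sphere, rng)
```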

The pseudocode for MBSAgC is shown in Algorithm 1.

Input: ObjFun, N, D, low, up, maxcycle
Output: Globalminimum
(1)% maxcycle is the maximum number of iterations. Globalminimum records the current minimum.
(2)% Initialization of P and oldP
(3)for i = 1 to N do
(4)  for j = 1 to D do
(5)    P(i, j) = low(j) + (up(j) − low(j)) · U(0, 1), oldP(i, j) = low(j) + (up(j) − low(j)) · U(0, 1)
(6)  end
(7)end
(8)for cycle = 1 to maxcycle do
(9)  % Topological opposition-based learning (TOBL)
(10)  for i = 1 to N do
(11)    for j = 1 to D do
(12)      xop = low(j) + up(j) − P(i, j)
(13)      if |xop − best(j)| < |P(i, j) − best(j)| then
(14)        P(i, j) = xop
(15)      end
(16)    end
(17)  end
(18)  % Selection-I
(19)  for i = 1 to N do
(20)    if a < b | a, b ~ U(0, 1) then
(21)      oldP(i, :) = P(i, :)
(22)    end
(23)  end
(24)  oldP = permuting(oldP)
(25)  % Improved mutation
(26)  for i = 1 to N do
(27)    M(i, :) = P(i, :) + F · (oldP(i, :) − P(i, :)) + k1 · (best − P(i, :)) + k2 · (mean(P) − P(i, :))
(28)  end
(29)  % Crossover: map(1 : N, 1 : D) = 1, T = M
(30)  if rand < rand then
(31)    for i = 1 to N do
(32)      u = randperm(D), map(i, u(1 : ⌈mix-rate · rand · D⌉)) = 0
(33)    end
(34)  else
(35)    for i = 1 to N do
(36)      map(i, randi(D)) = 0
(37)    end
(38)  end
(39)  for i = 1 to N do
(40)    for j = 1 to D do
(41)      if map(i, j) == 1 then
(42)        T(i, j) = P(i, j)
(43)      end
(44)    end
(45)  end
(46)  % Boundary control mechanism
(47)  for i = 1 to N do
(48)    for j = 1 to D do
(49)      if T(i, j) < low(j) or T(i, j) > up(j) then
(50)        T(i, j) = low(j) + (up(j) − low(j)) · U(0, 1)
(51)      end
(52)    end
(53)  end
(54)  % Selection-II
(55)  for i = 1 to N do
(56)    if ObjFun(T(i, :)) < ObjFun(P(i, :)) then
(57)      P(i, :) = T(i, :)
(58)      fitnessP(i) = ObjFun(T(i, :))
(59)    end
(60)  end
(61)  if min(fitnessP) < Globalminimum then
(62)    Globalminimum = min(fitnessP)
(63)  end
(64)end

4. Analysis and Discussion of MBSAgC

In fact, many heuristic algorithms simulate collective wisdom in different forms. Representative algorithms include the collective decision optimization algorithm (CDOA) [70, 71], teaching-learning-based optimization (TLBO) [72], the learning backtracking search optimization algorithm (LBSA) [52], and so on. Unlike these algorithms, MBSAgC proposes the TOBL operator to deeply mine the winner-tendency of collective wisdom and designs an improved mutation operator to merge multiple learning strategies. In particular, the TOBL operator broadens its search region around the best individual compared with the original opposition-based learning. Next, we analyze the search mechanism of TOBL and discuss the numerical performance of MBSAgC and BSA on several simple functions.

4.1. Search Mechanism of TOBL

Opposition-based learning (OBL) was originally proposed by Tizhoosh [73] in 2005 and has since attracted considerable interest; about 500 papers have been published on the OBL concept so far. Three articles summarize the developmental stages of OBL. Al-Qunaieer et al. [74] first summarized the applications of OBL up to 2011. In 2012, Xu et al. [75] provided summative research on the basic concepts, theories, and applications of OBL in intelligent algorithms, typical fields, and a number of challenges since 2005. Mahdavi et al. [76] offered a global overview of research on the theories, developments, and applications of OBL from 2012 to 2018.

Inspired by the above research works on OBL, this paper proposes a new improved strategy, topological OBL (TOBL). Compared with OBL, TOBL holds two advantages regarding computational overhead and searching space.

(i) It Does Not Increase the Computational Overhead. As shown in Equation (7), TOBL evaluates the two candidates by comparing the distances between the two candidates and the current best individual on each corresponding dimension. Unlike OBL and other improved OBL operators, TOBL thereby avoids additional computational overhead for function evaluations.

(ii) It Expands the Searching Space. Figure 1 shows the sketch of TOBL in 1, 2, and 3 dimensions, respectively. Take the 3-dimension sketch as an example: for an original point x, its opposite point can be obtained according to Equation (6). However, according to Equation (7), the eight vertices of the cube are all potential topological opposite points of TOBL. In the case shown in Figure 1(c), the topological opposite point of x is the vertex marked in blue. In fact, TOBL has 2^D potential points, while OBL has only one for each original point (where D denotes the dimension of the search space).

Next, we simulate the original OBL and TOBL operators on the 2-dimensional Sphere and Rosenbrock functions. As shown in Figure 2, the simulation results demonstrate that (1) on the symmetric Sphere function, the TOBL operator shows no advantage over the original OBL and (2) on the asymmetric Rosenbrock function, TOBL obtains better optimization effectiveness than the original OBL.

4.2. Contrastive Analysis between MBSAgC and BSA

In order to investigate the effectiveness of the modified strategies, we test the performance of MBSAgC and BSA on five unconstrained benchmark functions with different characteristics: Sphere, Schwefel 1.2, Rosenbrock, Rastrigin, and Griewank. The details of the five functions and the parameter settings are given in Table 1, where C, N, D, and FEs denote characteristics, population sizes, dimensions, and maximum numbers of fitness evaluations, respectively, and "U," "M," "S," and "N" stand for unimodal, multimodal, separable, and nonseparable. In this section, as the iterations increase, several tests compare MBSAgC and BSA on the change of the population P and of the best function value, using the same population size (N), dimension (D), and maximum FEs for both algorithms. Figure 3 shows how the best function values of MBSAgC and BSA vary with FEs on the five benchmark functions, and Figure 4 shows how the means of the population variances of MBSAgC and BSA vary with FEs.

According to the best function values in Figure 3, it can be seen that as the number of evaluations increases, the best function value of MBSAgC decreases faster than that of the BSA, illustrating that the convergence rate of MBSAgC is higher. Figure 3 also shows that MBSAgC finds better function values than the BSA on all five benchmark functions. Figure 4 shows the change in the population variances of MBSAgC and BSA. We can see from Figure 4 that in the early iterations, as the number of evaluations increases, the population variances of both MBSAgC and BSA begin to decrease; compared with the BSA, the variance of MBSAgC decreases faster and tends to zero earlier. These results show that MBSAgC converges faster than the BSA.

5. Numerical Experiments

In order to verify the performance of the proposed MBSAgC, two numerical experiments are executed in this section. The first experiment is conducted on the CEC2005 benchmark functions [77] (10-dimension and 30-dimension) and the CEC2014 benchmark functions [78] (30-dimension). To prove the effectiveness of MBSAgC, LBSA, BSA, SaDE [79], CLPSO [80], PSOFIPS [81], TLBO, ETLBO [82], PSOFDR [83], BSAISA [68], OBSA, and other state-of-the-art algorithms are used as comparison algorithms.

The second experiment is conducted on five engineering constrained optimization problems: the three-bar truss design problem (TTP), pressure vessel design problem (PVP), tension/compression spring design problem (TCSP), welded beam design problem (WBP), and speed reducer design problem (SRP). The results in the tables show the best function value (Best), the worst function value (Worst), the mean function value (Mean), the standard deviation (Std), and the number of fitness evaluations (FEs); the FEs are used to evaluate the computational cost of MBSAgC. We compare the results of MBSAgC with those of other well-known algorithms.

(1) Parameter Setting. All the experiments are implemented in MATLAB 2012a. For the first experiment, each comparison algorithm runs 30 times independently; the population size (N) is set to 50, and the dimension (D) is 10 and 30, respectively. In addition, the stopping criterion for all algorithms depends on the maximum FEs. For the five engineering design problems, each problem is run 50 times separately because the problems have different constraint conditions and their parameter settings differ; for the TTP, the maximum number of iterations is 1000, and the corresponding settings for the PVP, TCSP, WBP, and SRP are chosen per problem.

5.1. Experiments on Benchmark Functions
5.1.1. Experimental Results

(1) On 10-Dimension Functions in CEC2005. Table 2 shows the simulation results on the 10-dimension functions in CEC2005, comparing the mean, standard deviations, and best solutions of the ten algorithms. With respect to the best solution, Table 2 shows that the proposed MBSAgC ranks first on functions F1, F2, F4, F6, F9, F12, F15, F18–F20, F22, F24, and F25. LBSA is superior to the other nine algorithms on function F3; none of the compared algorithms could find the theoretical optimal solution on this function, the rank of MBSAgC in terms of the best solution is eighth, worse than most of the others, while the best solution of LBSA is closest to the theoretical optimal value and its standard deviation is the smallest among the ten algorithms. The best solution of PSOFDR is superior to those of the other algorithms on functions F10, F11, and F14. The best solution of BSAISA is better than those of the other nine algorithms on function F7. On function F1, all of the compared algorithms reach the global optimal solution. On function F2, the best solutions of CLPSO and OBSA are worse than those of the other eight algorithms. On function F4, BSAISA, OBSA, BSA, and CLPSO could not converge to the global optimum. MBSAgC can find the theoretical optimal solution on functions F1, F2, F4, F6, F9, F12, and F15. In Table 2, in terms of the optimal solutions of the twenty-five functions, the proposed MBSAgC ranks first on thirteen. The average ranking of the proposed algorithm is 2.76, and MBSAgC ranks first in terms of the best solution. To sum up, the performance of MBSAgC is competitive with the other algorithms.

(2) On 30-Dimension Functions in CEC2005. In this section, the experiment on the 30-dimension functions in CEC2005 is carried out, comparing the mean, standard deviations, and best solutions of the error values. In Table 3, all results are obtained after 30 independent runs, and the optimal solutions of the error values are ranked. From Table 3, it can be seen that MBSAgC is superior to the other ten algorithms on F6, F7, F11, and F12 at D = 30 and also performs best on functions F20, F21, and F23–F25. LBSA ranks first on functions F1, F9, F20, F21, F23, and F24. SaDE is better than all other algorithms on functions F3, F4, F5, F10, and F16. OBSA ranks first on functions F18, F19, F21, and F23–F25. CLPSO and PSOFDR perform worse than the other nine algorithms on functions F21 and F24. The average ranking of MBSAgC is 2.56, the first among all compared algorithms. The statistical results in Table 3 indicate that MBSAgC has better global performance than the other algorithms and a better capability to reach the best solution over 30 runs. According to these results, MBSAgC performs better than all other comparison algorithms on the 30-dimension functions in CEC2005.

(3) On 30-Dimension Functions in CEC2014. There are 30 benchmark functions in CEC2014; it is difficult to reach the global optimal solution because the global optimum is shifted. In this part, we compare the mean, standard deviations, and best solutions of the error values between the actual solution and the theoretical optimal value, rank the best error value of each function, and give the statistical results of each comparison algorithm in Table 4. Over all 30 functions, MBSAgC outperforms the other nine algorithms on functions F4, F5, F14, F19, and F21. PSOFIPS performs best on functions F6, F29, and F30. PSOFDR is superior to the other algorithms on functions F11, F15, and F16. CLPSO performs somewhat unsatisfactorily on all functions except function F26. LBSA ranks first in terms of the best value on functions F2, F3, F8, F9, F10, F13, F17, F20, and F22. CBSA performs better than the other algorithms on functions F12, F23, F27, and F28. On function F7, MBSAgC and LBSA obtain the same best function value and are better than the other algorithms. All algorithms reach the same value on function F26 except CLBSA. On functions F24 and F25, CBSA, TLBO, ETLBO, and DGSTLBO obtain the same value and are better than the other six algorithms. The average rankings under the three comparison criteria are also listed in the table. As we can see, LBSA is the best under all three criteria; MBSAgC performs slightly worse than LBSA in Best and Std but better than the other algorithms. Overall, MBSAgC's rank is still very competitive on CEC2014.

5.1.2. Statistical Analyses for Benchmark Results

The sign test [84] is a common method to determine whether there is a significant difference between two algorithms. In this paper, we use the two-tailed sign test at a significance level of 0.05 to test the differences between the results obtained by MBSAgC and each compared algorithm. The statistical results are listed in Table 5, with the best solution used as the criterion of the sign test. The signs "+," "≈," and "−" indicate that MBSAgC performs better than, almost the same as, and worse than the compared algorithm, respectively. The null hypothesis in this paper is that the performances of MBSAgC and a given compared algorithm are not significantly different.
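For reference, the two-tailed sign test P value can be computed from the win/loss counts with an exact binomial tail, dropping ties; the function name and interface below are ours, not from reference [84]:

```python
from math import comb

def sign_test_p(wins, losses):
    """Exact two-tailed sign test under H0: P(win) = 0.5 (ties excluded)."""
    n = wins + losses
    k = min(wins, losses)
    # probability of a result at least this one-sided under the null
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)     # two-tailed, capped at 1
```

For example, a heavily one-sided record such as 19 wins to 1 loss yields a P value far below 0.05, while a near-even split such as 28 wins to 34 losses does not reach significance.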

Thirteen algorithms are compared in the benchmark experiment, and Table 5 lists the analytical test results for all of them. The P values supporting the null hypothesis of the sign test for ten pairings (MBSAgC-CLPSO, MBSAgC-PSOFIPS, MBSAgC-TLBO, MBSAgC-ETLBO, MBSAgC-PSOFDR, MBSAgC-OBSA, MBSAgC-BSA, MBSAgC-CBSA, MBSAgC-CLBSA, and MBSAgC-DGSTLBO) are lower than the significance level 0.05, so we can reject the null hypothesis; this illustrates that the optimization performance of MBSAgC is significantly better than those of these ten algorithms. The P values of MBSAgC-LBSA, MBSAgC-BSAISA, and MBSAgC-SaDE are higher than 0.05 (0.619, 0.474, and 0.717, respectively), so we cannot reject the null hypothesis. However, this does not mean that MBSAgC is much worse than these algorithms. In light of the sign values in Table 5, for MBSAgC-LBSA, MBSAgC is worse than LBSA on thirty-four functions and better on twenty-eight; moreover, MBSAgC performs better on both 10D and 30D in CEC2005. MBSAgC is slightly worse than BSAISA on fourteen functions but better on nineteen. For MBSAgC-SaDE, the number of "+" signs equals the number of "−" signs, but MBSAgC is better than SaDE in the comprehensive ranking.

5.2. Experiments on Engineering Design Problems

In this part, we test MBSAgC on five well-known engineering design problems: the three-bar truss design problem (TTP), pressure vessel design problem (PVP), tension/compression spring design problem (TCSP), welded beam design problem (WBP), and speed reducer design problem (SRP). The details of the five design problems can be found in Appendix A. The results in Tables 6–15 show the performance of MBSAgC and its competitiveness with the compared algorithms.

5.2.1. Engineering Problem and Experimental Results

(1) Three-Bar Truss Design Problem (TTP). We compare MBSAgC with the BSA, differential evolution with dynamic stochastic selection (DEDS) [85], hybrid evolutionary algorithm (HEAA) [86], hybrid particle swarm optimization with differential evolution (PSO-DE) [87], differential evolution with level comparison (DELC) [88], and mine blast algorithm (MBA) [89]. The results of all compared algorithms are shown in Tables 6 and 7, and the best solution obtained by MBSAgC requires 9540 FEs. In Table 6, we can see that only MBA does not reach the best solution; the other algorithms all reach the best solution with the corresponding function value equal to 263.895483. MBSAgC is not only lower than the other algorithms in terms of FEs, but its Std value is also the lowest among all algorithms. Table 6 shows the comparison of the best solutions obtained by MBSAgC and the other algorithms, and the statistical results of all compared algorithms are listed in Table 7.

From Tables 6 and 7, all of the compared algorithms reach the best solution equal to 263.89583 except MBA with 263.89582. However, MBSAgC requires the lowest FEs (9540) among all algorithms, and its Std value is also better than those of all compared algorithms. These comparative results illustrate that MBSAgC outperforms the other algorithms in terms of computational overhead and robustness for this problem.

Figure 5(a) shows the convergence curves of MBSAgC and BSA for the three-bar truss design problem. As shown in Figure 5(a), the BSA reaches the global optimum at about 13,000 FEs, while MBSAgC reaches it at only about 9000 FEs, illustrating that the convergence rate of MBSAgC is faster than that of the BSA for this problem.

(2) Pressure Vessel Design Problem (PVP). The pressure vessel design problem has a nonlinear objective function with three linear and one nonlinear inequality constraints, and two discrete and two continuous design variables. For this problem, BSA, BSA-SAε, DELC, PSO-DE, the genetic algorithm based on dominance tournament selection (GA-DT) [90], modified differential evolution (MDE) [91], coevolutionary particle swarm optimization (CPSO) [92], hybrid particle swarm optimization (HPSO) [93], and the artificial bee colony algorithm (ABC) are used as comparison algorithms. The best solution obtained by MBSAgC requires 16180 FEs, the lowest among all compared algorithms. Table 8 shows the comparison of the best solutions obtained by MBSAgC and the other algorithms. The statistical results of all compared algorithms are listed in Table 9.

As shown in Table 8, MBSAgC, BSA-SAε, ABC, DELC, and HPSO reach the same function value equal to 6059.7143, which is worse than the function value 6059.7017 of MDE, but all seven algorithms obtain feasible solutions for this design problem. From Table 9, the solution of MDE ranks first in terms of the best solution, and the best solution of MBSAgC is better than those of GA-DT, CPSO, ABC, and BSA. Though MDE is better than MBSAgC in terms of the best solution, the FEs of MBSAgC are much lower than those of MDE. This illustrates that MBSAgC is competitive with the other algorithms on this engineering problem.

Figure 5(b) describes the convergence curves of MBSAgC and BSA for the pressure vessel design problem. As shown in Figure 5(b), MBSAgC finds the global optimum at about 16,000 FEs and obtains a far more accurate function value than BSA. Moreover, the convergence speed of MBSAgC is much faster than that of BSA.

(3) Tension/Compression Spring Design Problem (TCSP). The tension/compression spring design problem has three continuous variables and four nonlinear inequality constraints. MBSAgC obtains the best feasible solution using 22,680 FEs. We compare the results obtained by MBSAgC with those of GA-DT, MDE, CPSO, HPSO, DEDS, HEAA, DELC, PSO-DE, ABC, MBA, BSA-SAε, BSAISA, and social spider optimization (SSOC) [94]. The comparison of the best solutions obtained by all comparison algorithms is presented in Table 10, and the statistical results of the various algorithms are listed in Table 11.
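The spring model can likewise be sketched compactly. The code below follows the standard formulation from the literature (function names are ours; the authors' appendix may present it differently).

```python
# Tension/compression spring design problem: standard formulation from
# the literature. x1 = wire diameter, x2 = mean coil diameter,
# x3 = number of active coils.

def spring_objective(x1, x2, x3):
    """Weight of the spring."""
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x1, x2, x3):
    """Inequality constraints g_i(x) <= 0."""
    g1 = 1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4)           # deflection
    g2 = ((4.0 * x2 ** 2 - x1 * x2)
          / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)                   # shear stress
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)                   # surge frequency
    g4 = (x1 + x2) / 1.5 - 1.0                                # outer diameter
    return (g1, g2, g3, g4)

# A widely reported best solution is x ≈ (0.051689, 0.356718, 11.288965)
# with f(x) ≈ 0.012665, the best value that appears in Table 10.
```

At the reported optimum the deflection and shear-stress constraints are both essentially active, which is typical of this benchmark.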

In Table 10, only GA-DT and CPSO fail to reach the best solution value 0.012665 for this problem; all other compared algorithms reach it. MBSAgC requires 22,680 FEs to find the global best solution, more than MBA (7,650 FEs) and DELC (20,000 FEs), but its Std is better than both of them. In summary, MBSAgC outperforms the compared algorithms except MBA and DELC in terms of FEs, and its robustness is better than that of all others except HEAA.

Figure 5(c) depicts the convergence curves of MBSAgC and BSA for the tension/compression spring design problem. From Figure 5(c), it can be observed that both MBSAgC and BSA are prone to falling into a local optimum in the early iterations, but both manage to escape it successfully. Nevertheless, the convergence speed of MBSAgC is clearly faster than that of BSA.

(4) Welded Beam Design Problem (WBP). The welded beam design problem is a minimum-cost problem with four continuous design variables, subject to two linear and five nonlinear inequality constraints. For this problem, MBSAgC is compared with GA-DT, MDE, CPSO, HPSO, DELC, PSO-DE, ABC, MBA, BSA, BSA-SAε, SSOC, and BSAISA. The comparison of the best solutions obtained by all comparison algorithms is presented in Table 12, and the statistical results of the various algorithms are listed in Table 13.
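The welded beam model is the most involved of the five; the sketch below follows the standard formulation from the literature (constants P = 6000, L = 14, E = 3×10⁷, G = 1.2×10⁷; function names are ours, and the authors' appendix may present the derived quantities differently).

```python
import math

# Welded beam design problem: standard formulation from the literature.
# x1 = weld thickness, x2 = weld length, x3 = beam height, x4 = beam width.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def beam_objective(x1, x2, x3, x4):
    """Fabrication cost of the welded beam."""
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def beam_constraints(x1, x2, x3, x4):
    """Inequality constraints g_i(x) <= 0."""
    tau_p = P / (math.sqrt(2.0) * x1 * x2)                    # primary shear
    m = P * (L + x2 / 2.0)                                    # bending moment
    r = math.sqrt(x2 ** 2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    j = 2.0 * (math.sqrt(2.0) * x1 * x2
               * (x2 ** 2 / 12.0 + ((x1 + x3) / 2.0) ** 2))   # polar moment
    tau_pp = m * r / j                                        # torsional shear
    tau = math.sqrt(tau_p ** 2
                    + 2.0 * tau_p * tau_pp * x2 / (2.0 * r)
                    + tau_pp ** 2)                            # total shear
    sigma = 6.0 * P * L / (x4 * x3 ** 2)                      # bending stress
    delta = 4.0 * P * L ** 3 / (E * x3 ** 3 * x4)             # tip deflection
    p_c = (4.013 * E * math.sqrt(x3 ** 2 * x4 ** 6 / 36.0) / L ** 2
           * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))  # buckling
    return (tau - TAU_MAX,
            sigma - SIGMA_MAX,
            x1 - x4,
            0.10471 * x1 ** 2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
            0.125 - x1,
            delta - DELTA_MAX,
            P - p_c)

# A widely reported best solution is x ≈ (0.205730, 3.470489, 9.036624,
# 0.205730) with f(x) ≈ 1.724852, the best value that appears in Table 12.
```

At the reported optimum the shear-stress, bending-stress, and buckling constraints are all essentially active, and x1 = x4 makes the third constraint exactly zero.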

As can be seen from Table 13, MBSAgC obtains its best feasible solution using 21,640 FEs. Only GA-DT, CPSO, and MBA fail to find the best function value 1.724852; all other compared algorithms find it. DELC outperforms the other algorithms in both Std and FEs, and MBSAgC performs better than all algorithms except DELC in terms of FEs. This illustrates that MBSAgC is competitive in computational cost, although its robustness is relatively poor: its Std value (2.765E−01) is worse than those of most of the compared algorithms.

Figure 5(d) depicts the convergence curves of MBSAgC and BSA for the welded beam design problem and shows that the convergence speed of MBSAgC is remarkably faster than that of BSA.

(5) Speed Reducer Design Problem (SRP). The speed reducer design problem consists of eleven constraints and seven design variables, one of which is an integer variable. MBSAgC obtains its best solution using 12,580 FEs. We compare the results of MBSAgC with those of 10 well-known algorithms: BSAISA, BSA, MDE, DEDS, HEAA, DELC, PSO-DE, ABC, MBA, and SSOC. The compared best solutions and the statistical results of all comparison algorithms are presented in Tables 14 and 15, respectively.
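For reference, the speed reducer objective can be sketched as below, again using the standard formulation from the literature; the eleven constraints are omitted for brevity, the function name is ours, and the authors' appendix may present the model differently.

```python
# Speed reducer design problem: standard objective from the literature.
# x = (b, m, z, l1, l2, d1, d2): gear face width, teeth module, number
# of pinion teeth (integer), shaft lengths, and shaft diameters.

def reducer_objective(x1, x2, x3, x4, x5, x6, x7):
    """Weight of the speed reducer."""
    return (0.7854 * x1 * x2 ** 2
            * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

# A widely reported best solution in the literature is
# x ≈ (3.5, 0.7, 17, 7.3, 7.715319, 3.350214, 5.286654)
# with f(x) ≈ 2994.471066.
```

Evaluating the objective at this point reproduces the best value commonly cited for this benchmark.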

As seen in Tables 14 and 15, MBSAgC, BSAISA, BSA, DEDS, and DELC obtain the same optimal value of 2994.471066. The other algorithms obtain different optimal values, but all of the function values they reach are feasible. MBA requires the lowest FEs (6,300), but it fails to find the best solution for this problem; MBSAgC requires far fewer FEs than all other compared algorithms except MBA. Since this experiment tests whether an algorithm can find the optimal solution at minimum computational cost, MBSAgC clearly meets the requirement: it finds the best solution for this problem with comparatively few FEs.

Figure 5(e) depicts the convergence curves of MBSAgC and BSA for the speed reducer design problem and shows that the convergence speed of MBSAgC is faster than that of BSA.

5.2.2. Statistical Analyses for Engineering Results

In order to further analyze the comprehensive performance of the proposed MBSAgC, we list the optimal feasible solutions and the minimum FEs reported for all of the abovementioned engineering design problems and compare them with those of MBSAgC. The statistical results are shown in Table 16. As can be seen from Table 16, MBSAgC finds the optimal solution of four of the five engineering design problems. For the three-bar truss problem, the feasible solution obtained by MBSAgC is optimal and its FEs is also the lowest, which illustrates that the results of MBSAgC are competitive. For the pressure vessel design problem, the feasible solution found by MBSAgC matches that of most compared algorithms, while its FEs is the minimum among all of them. For the remaining three problems, MBSAgC finds the optimal feasible solution; although its FEs is not the smallest, its ranking among all compared algorithms is competitive. Furthermore, MBA requires fewer FEs on the speed reducer problem but fails to find the current best value. Overall, in terms of both the ability to find the optimal solution and computational overhead, MBSAgC holds a very competitive performance on these optimization problems compared with the state-of-the-art algorithms.

5.3. Synthetical Discussions Based on Experimental Results

Based on the above experimental results on CEC2005, CEC2014, and several engineering optimization problems, this subsection discusses the search behaviors of the proposed algorithm from three perspectives including its solution accuracy, convergence speed, and future development.

5.3.1. Solution Accuracy

The “Best” values in all the previous tables reflect the solution accuracy of each algorithm on the test functions (or engineering optimization problems): the smaller the “Best” value, the higher the solution accuracy. For the 10-dimension functions of CEC2005, the average ranking of MBSAgC’s “Best” value equals 2.76 (shown in Table 2), which ranks first among all the compared algorithms. For the 30-dimension functions of CEC2005, the average ranking of MBSAgC is also first (2.56, shown in Table 3). For the 30-dimension functions of CEC2014, its average ranking is second (2.63, shown in Table 4). Moreover, MBSAgC finds the current optimum of four of the five engineering optimization problems. Overall, the solution accuracy of MBSAgC is clearly superior to that of the other compared algorithms.

5.3.2. Convergence Speed

The “FEs” values in all the previous tables reflect the convergence speed of each algorithm on the test functions (or engineering optimization problems): the smaller the “FEs” value, the faster the convergence speed. In addition, each curve in the previous convergence figures represents the variation of the current optima (or error values) over the number of fitness evaluations, so the trend of each curve also reflects the convergence speed of the compared algorithms. For the five engineering optimization problems (shown in Table 16), two of the “FEs” values of MBSAgC rank first, two rank second, and the remaining one ranks third. Like all the compared algorithms, MBSAgC solves all the benchmark functions within the given “FEs”, and Figures 3–5 show that MBSAgC holds a faster convergence speed than the original BSA. Overall, the proposed MBSAgC remains strongly competitive in convergence speed with the compared state-of-the-art algorithms.

5.3.3. Future Development

According to the above discussions, we can conclude that the proposed MBSAgC improves solution accuracy while maintaining a competitive convergence speed. This indicates that the modified strategies based on collective wisdom, especially the winner-tendency, do enhance the exploitation capability of MBSAgC. However, we also notice that MBSAgC performs worse than LBSA on the CEC2014 benchmark functions. This leaves room for further development of the topological opposition-based learning operator (TOBL), which is the core of all the modified strategies of MBSAgC.

6. Conclusions

In order to enhance the exploitation capability of BSA, this paper proposes an improved BSA variant, the deep-mining backtracking search optimization algorithm guided by collective wisdom (MBSAgC). MBSAgC designs three learning mechanisms by deeply mining collective wisdom. The three mechanisms are assembled into two novel reproduction operators, i.e., a topological opposition-based learning operator (TOBL) and an improved mutation operator. The searching mechanisms of the novel TOBL operator and of MBSAgC are analyzed and discussed in detail. The theoretical analysis demonstrates that the TOBL operator searches the vertices of a hypercube about the best individual, which broadens the search region of the original opposition-based learning strategy. In addition, the proposed algorithm does not introduce any additional parameters that must be set manually. The numerical results on the CEC2005 and CEC2014 benchmark functions and five engineering constrained design problems demonstrate the superiority of the proposed algorithm. The advantages of the searching mechanisms and the experimental results also illustrate that TOBL is a promising operator for enhancing the exploitation capability of EAs.

Appendix

A. Engineering Design Problems

A.1. Three-Bar Truss Design Problem

A.2. Pressure Vessel Design Problem

A.3. Tension/Compression Spring Design Problem

A.4. Welded Beam Design Problem


A.5. Speed Reducer Design Problem


Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the State Key Laboratory of Biogeology and Environmental Geology (China University of Geosciences, No. GBL21801), the National Natural Science Foundation of China (No. 61972136), and the Hubei Provincial Department of Education Outstanding Youth Scientific Innovation Team Support Foundation (No. T201410).